DGS (General)
takahiro.yamamoto - 2:42 Tuesday 07 April 2026 (36706)
Comment to Migrating scripts from k1script1 to k1script0. (36519)
All EPICS IOCs have also been moved to k1script0.
All resident scripts and services have now been migrated.
I will start migrating the on-demand scripts tomorrow.

IOCs built as standard ones
cam_ioc, das0_ioc, and weather_ioc, which ran on k1script1 (a Debian 10 system), had been built as standard IOCs under the EPICS-3.14.5 environment without any special configuration. The shared objects linked by these applications do not depend on specific library versions, so they could be launched on k1script0 (Debian 13) just by setting LD_LIBRARY_PATH for EPICS-3.14.5. Since having to set LD_LIBRARY_PATH at every launch is troublesome, I rewrote the RUNPATH embedded in the ELF binaries with
> patchelf --set-rpath /kagra/apps/epics-3.14.12.3_long-deb13/base-3.14.12.3/lib/linux-x86_64 /path/to/AppName
They can now be launched without LD_LIBRARY_PATH, and the ELF binaries for Debian 10 are archived as target/AppDir/bin/linux-x86_64/AppName-deb10.
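As a quick sanity check (a minimal sketch; /path/to/AppName is a placeholder as above), the injected RUNPATH and the library resolution can be verified with
> # confirm the RUNPATH written by patchelf, then check that every library resolves
> patchelf --print-rpath /path/to/AppName
> ldd /path/to/AppName | grep 'not found' || echo 'all shared objects resolved'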

IOCs built with a special dbd definition and a support module
cryocon_ioc, sys_ioc, and vac_ioc had been built with a special dbd definition and a support module under the EPICS-3.14.5 environment. Because the support module depends on a shared object whose version is not available on the Debian 13 system, rewriting the RUNPATH does not work in this case. However, these IOCs did not use any function of the special dbd at all, so I rebuilt them as standard IOCs under the EPICS-3.14.5 environment. The old build for Debian 10 is archived as target/archive/IOC/AppDir-deb10.
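For reference, a hedged sketch of how to list the shared objects that the archived special build cannot resolve on Debian 13 (the exact binary path is an assumption following the archive naming above):
> # show only the dependencies the Debian 13 loader cannot find
> ldd target/archive/IOC/AppDir-deb10/bin/linux-x86_64/AppName | grep 'not found'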

IOCs built under different EPICS version environment
cryhx_ioc had been built as a standard IOC, but under a different EPICS version environment from /kagra/apps/epics-***. It seems to have been built against the EPICS installed in the system region from the Debian 10 base repository, probably because /kagra/apps/etc/epics-user-env***.sh was not loaded before executing the build command. In any case, it has no compatibility with the Debian 13 environment at all, so I started over from the configure step, not just the build. The old configured application for Debian 10 is archived as target/archive/IOC/k1cryhx.
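A hedged way to distinguish a build against the system (apt) EPICS from one against /kagra/apps is to check where libCom resolves (libCom is a standard EPICS base library; the exact binary path is an assumption):
> ldd target/archive/IOC/k1cryhx/bin/linux-x86_64/AppName | grep libCom
> # a path under /usr/lib indicates the Debian-repository EPICS;
> # /kagra/apps/epics-*/lib/linux-x86_64 indicates the intended environment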
DGS (General)
takahiro.yamamoto - 0:58 Tuesday 07 April 2026 (36705)
Replacement of wrong combination of timing fiber for V2 IO chassis

[Ikeda, YamaT]
 

Abstract

In recent days, we have been replacing IO chassis from V1 to V2.
(K1TEST0 in klog#33560, K1IX1 in klog#36572, K1IY1 in klog#36625, K1EX0 in klog#36654, and K1EY0 in klog#36692).
The real-time models on these front ends seemed to work well.
However, they had a problem in which the IRIG-B timing took a very long time to stabilize after a cold boot of the front-end computer.

Last weekend, we realized that this might be caused by a mix-up of different fiber-optic cables.
So we replaced the fiber-optic cables of K1TEST0, K1IX1, K1IY1, and K1EX0 with the correct combination.
After that, the issue of the timing synchronization taking too long no longer occurred.

We did not have enough time to go to the Y-end today, so we will do the same for K1EY0 tomorrow.
 

Details

The V2 IO chassis uses a new cable connection standard, and some of its specifications differ from those of the V1 IO chassis. The most significant difference is that the data transfer cable between the front-end computer and the IO chassis has been changed from custom-made HIB optical fiber to standard MTP fiber (OM3). Thanks to this change, we are no longer constrained by the limited availability of HIB cables, and we can provide additional IO chassis as spares and/or for new instruments in the future, e.g., the filter cavity.

In addition to this change, the timing fiber in the IO chassis was also changed to OM3 cable. Strictly speaking, on the V1 IO chassis the SFP port of the timing slave was directly accessible from the front panel and any SFP module could be attached, so there was no internal cable; in other words, it is more accurate to say that the interface was changed rather than the cable standard.

Until now, OM1 cables have been installed wherever multimode optical fiber is used, not just for timing signals. (Single-mode fiber, OS1, is used for long-distance connections, so it is slightly off-topic here.) Most of the timing slaves in the V1 IO chassis were therefore connected to the timing fanout with OM1 cables, and we had reused those cables for the V2 IO chassis when we replaced the chassis from V1 to V2.

Although the real-time models themselves also ran well in this configuration, it remained a mystery why it took several hours for the IRIG-B timing to stabilize after a cold boot of the front-end computer. Figures 1 and 2 show the fluctuations of the IRIG-B value following a cold boot on EY0 and IX1, respectively. Both show that it takes O(hours) for the IRIG-B value to stabilize within the normal range (5-20 us). For EY0 (Fig. 1), this behavior was reproduced in three trials.

The IRIG-B value is not the primary source of timing synchronization but merely serves as a cross-check. However, if it falls outside the normal range, RFM and Dolphin communication fail, and as a result global control cannot be performed. Even when it took a very long time to stabilize, the IRIG-B value eventually returned to the normal range, so we were able to resume global control by waiting out this fantastic time. But a wait of O(hours) is serious from the viewpoint of the effective downtime of the digital system, for both commissioning time and observing time.
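To watch this stabilization, a minimal sketch assuming the standard front-end diagnostic record for the IRIG-B offset (the exact record name and the DCUID are assumptions; the GDS_TP screen shows the value actually served by each front end):
> # hypothetical channel name; substitute the real FEC record and DCUID
> camonitor K1:FEC-<dcuid>_IRIGB_TIME
> # healthy values settle within the normal range (5-20 us)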

At first, we did not see the relation between the IRIG-B issue and the fiber cable standard, but we finally noticed that OM1 and OM3 cables have different core diameters (62.5 μm and 50 μm, respectively) and that mixing them can cause speed reduction and instability, especially for long-distance connections in the Ethernet case. To be honest, we were not sure whether the same discussion applied to timing synchronization, but we decided to try unifying the fiber cables to OM3.

After replacing the fiber cables, we no longer needed to wait for the IRIG-B timing to stabilize, and the issue was not reproduced over a few power cycles. So we concluded that the IRIG-B issue was induced by the mismatch of the optical fiber standards. Today we were able to complete the replacement work for K1TEST0, K1IX1, K1IY1, and K1EX0. The remaining K1EY0 will be dealt with tomorrow.

Images attached to this report
VIS (SRM)
ryutaro.takahashi - 22:00 Monday 06 April 2026 (36704)
Comment to Investigation of BF GAS (36675)

[Washimi, Takahashi]

We recovered the flags that were touching the OSEM bodies in the IM. We rotated the IM with the yaw picomotor, aligning the TM with the F0 yaw FR. The H3 flag was released easily (Pics. 1 and 2), but then the H1 (Pic. 3) and V2 (Pic. 4) flags touched the OSEM bodies. Though we tried to find a good relative geometry between the IM and the IRM, we could not find one. We therefore changed strategy and adjusted the positions of the OSEM bodies by moving the base panel of the OSEMs. The axial positions of the H1, H3, and V1 OSEMs were also adjusted. We rotated the H2, H3, and V3 flags to be perpendicular to the LED-PD line in the OSEM.

Images attached to this comment
DGS (General)
satoru.ikeda - 18:01 Monday 06 April 2026 (36703)
Updated k1vissrmt model file; removed unnecessary DAQ channels

We have updated the k1vissrmt model file and removed unnecessary DAQ channels.

Requests from Ushiba-san and R.Takahashi-san
Related to K-Log#36664
[K1SRM]
 model: k1vissrmt
 detail: Change the input ADC channels for ACC_H1, 2, and 3 from 20, 21, and 22 to 21, 22, and 23

Request from YamaT-san
 We have commented out K1EDCU_TESTIOC.ini from the DAQ master file.
 

Images attached to this report
Non-image files attached to this report
MIF (General)
hirose.chiaki - 16:24 Monday 06 April 2026 (36702)
Entered the mine to draw up the layout diagram for the POS optical table

[Nakagaki, Tanaka, Ushiba, Hirose]

We plan to set up a new sub-laser and optics on the POS table in order to perform absolute measurements of PRCL and SRCL. Therefore, it is necessary to draw up a layout diagram for the POS table, like JGW-T1909623. In today's work, we confirmed the current configuration of the optics.

  • There was an optical setup for the GRY lock (green optical fibre from the PSL room, a mirror for Fibre Noise Cancellation, a PD for Phase Noise Cancellation, and an RFPD for PDH signal detection) that performed a similar function to the GRX lock on the POP table.
  • Furthermore, for IR light, there were PDs for S-polarisation and P-polarisation for dark matter measurements.

We have asked Nakagaki-san to draw up detailed diagrams of the optics. This will be posted in JGWdoc or Klog.

CAL (Gcal general)
dan.chen - 15:37 Monday 06 April 2026 (36701)
Ncal pylon relocation

With Hayakawa-san, SawadaH-san, TakahashiM-san, Yamaguchi-san, Ohmae-san

The Ncal pylons were moved from the X-end parking area back to their original location under the beam pipe inside the X-end experimental room.

Before the relocation, the following preparations were performed:

  • The protective cover used during the transportation through the 3 km tunnel was removed.
  • One of the pallets was found to be dirty, so it was replaced with a clean one available inside the tunnel.

Pictures: link

VIS (SRM)
ryutaro.takahashi - 10:01 Saturday 04 April 2026 (36700)
Comment to Investigation on FLDACC (36681)

I measured the spectra of the FLDACCs. They look fine. The displacement noise was less than 0.1um/rHz at 0.1Hz.

Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 9:54 Saturday 04 April 2026 (36699)
Comment to Health check (36669)

I checked the IP transfer functions after removing the 1 kg ballast ring. The resonant frequencies of the IP changed from 51 to 55 mHz for L and from 59 to 63 mHz for T. They are still lower than the reference values (63 mHz for L, 70 mHz for T).

Images attached to this comment
DGS (General)
takahiro.yamamoto - 3:15 Saturday 04 April 2026 (36698)
Comment to Deployment of V2 IO-chassis and the front-end computer for EY0 (36692)

[Nakagaki, Ikeda, YamaT]

We solved the missing PCIe card issue by replacing the 2nd DAC (DAC#1).
After that, the ADC and DAC noise was measured for all channels, and k1ey0 finally came back online.

Note that the PEM speaker output cable is now unplugged from the AI chassis, because timing synchronization cannot be recovered while the speaker is plugged in.
Please connect it when it is to be used.
According to our experience at MCF, once timing synchronization has been established, the speaker can be connected.
But it sometimes causes an OS hang-up, so it is better to unplug it again after use.

-----
Solving the missing PCIe card issue
Today, we first tried to reproduce yesterday's situation by removing the BO card and then swapping DAC#0 and DAC#1. But the OS could not find DAC#1, so we concluded that removing the BO card is not a guaranteed way to operate the two DACs. Next, we replaced DAC#1 from S1809372 with S2516731. Then both DACs could be operated even with the BO card installed, and the reproducibility was verified over several power cycles. We have not identified why S1809372 did not work properly, but we currently suspect two possibilities and will check them on the test bench:
1. The DAC was simply damaged by the speaker issue.
2. The LIGO firmware was not applied to the DAC. (It was not purchased by DGS, so we do not know its details.)

Noise measurement
After all PCIe cards were operating properly, the noise level was measured for all ADC/DAC channels, as shown in the attached figures. The measurement configurations were exactly the same as those for K1IX1 (klog#36586), K1IY1 (klog#36641), and K1EX0 (klog#36663). The measurement files are stored in /users/DGS/measurements/ADC/K1EY0/2026/0403_V2_IO_CHASSIS/.

RFM board replacement
When we were cleaning everything up at the end of our work, we noticed that the RFM connection raised an error bit. This was due to the use of the wrong type of SFP module (single-mode vs. multi-mode). On the 1st floor of the end stations, the RFM fibers are laid as single-mode fiber, whereas the new V4 front-end computer, which had been used as K1IY0 in the past, had a multi-mode RFM board. So we moved the single-mode RFM board from the old V3 front-end computer to the new V4 front-end computer; the RFM connection then came back online and the GDS_TP screen showed all green. As a result, the remaining spare RFM board is a multi-mode one.

Remaining concern
The only remaining concern is that it takes a few hours for IRIG-B to synchronize when a cold boot of the front-end computer is performed. In the past, similar things often happened when the downtime was very long and the room temperature changed significantly, presumably because the temperature of the crystal oscillator of the timing slave took a long time to stabilize. This time, however, it takes a long time even when the downtime is only a few seconds. Such behavior was not reproduced on the test bench, so we have no idea how to solve this issue at the moment.

Fortunately, this issue does not occur when the front-end computer is simply rebooted (there seems to be some difference between a reboot and a cold boot), so we do not face it very frequently. But reproducing and solving it on the test bench is required from the viewpoint of reducing downtime. If it is caused by the environment in the mine, we may need an additional test in the mine instead of on the test bench.

Images attached to this comment
DGS (General)
takahiro.yamamoto - 20:43 Friday 03 April 2026 (36697)
New TCam cameras were removed from the camera network
The two candidate devices for the new TCam have completed testing on the GigE camera network and have been retrieved.
These devices will be used for an operation test with the spare TCam server in Mozumi.

-----
Details of the tests on the GigE camera network can be found in klog#36390 for the a2A5328-4gmPRO and klog#36527 for the acA4112-8gm. Based on these test results, we ultimately concluded that the new TCam should be operated using a dedicated TCam server and a dedicated local network rather than the existing camera network. The retrieved devices will therefore be used at Mozumi for a test operation on the spare TCam server.
VIS (SRM)
ryutaro.takahashi - 18:59 Friday 03 April 2026 (36696)
Comment to Investigation on FLDACC (36681)

[Washimi, Takahashi]

We repaired the broken Lemo cable for the H3 FLDACC. Though the cable was connected to the H3 FLDACC again, there was still no signal. We also checked the in-vacuum PTFE cable: the connections of some pins (#1, #6, #7) were unstable or broken. After we repaired the PTFE cable, we could confirm the signal and the actuation of the H3 FLDACC. After that, we adjusted the tilt of all the FLDACCs so that the LVDT signals fluctuate around zero.

Additionally, we fixed the PTFE cable for the H2 FLDACC with a PEEK tie (Pic.) and removed the 1 kg ballast ring from the IP.

Images attached to this comment
VIS (SRM)
tomotada.akutsu - 0:09 Friday 03 April 2026 (36695)
Comment to Investigation of BF GAS (36675)

By the way, we should have a jig, made at the beginning of 2024, to rotate the flag gently so that the tip direction can be aligned with the vertical direction.

VIS (SRM)
ryutaro.takahashi - 23:13 Thursday 02 April 2026 (36694)
Comment to Investigation of BF GAS (36675)

[Ushiba, Washimi, Takahashi]

We found that the H1 (Pic. 1) and H3 (Pic. 3) OSEM flags on the IM are touching the OSEM bodies on the IRM. The H2 (Pic. 2) OSEM flag was also close to the OSEM body.

Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 22:58 Thursday 02 April 2026 (36693)
Comment to Investigation on FLDACC (36681)

[Ushiba, Washimi, Takahashi]

We went to investigate the H1 and H3 FLDACCs. We opened the actuator housings and checked the movable ranges with the LVDT. The range was from +80 (actuator side) to +1700 (LVDT side) in H1 and from -2500 to +750 in H3. When we removed the actuator yoke (Pic. 1), the range on the actuator side extended to -600 in H1 (Pic. 2); the actuator coil (Pic. 3) may have been touching the yoke. We inserted a 0.5 mm shim between the coil bobbin and the folded pendulum body in H1. The range did not change in H3. The folded pendulums had gone into an unstable mode in both H1 and H3, so we removed counterweights of 25 g for H1 and 19 g for H3. The transfer functions of these two folded pendulums were almost consistent and showed the same resonant frequency of 0.3 Hz (Pic. 4). Before closing the top chamber, the Lemo cable for the actuator in H3 broke. Pic. 5 shows the displacement spectra of the H1 and H2 FLDACCs. The sensitivity of the H1 FLDACC was improved.

Images attached to this comment
DGS (General)
takahiro.yamamoto - 21:37 Thursday 02 April 2026 (36692)
Deployment of V2 IO-chassis and the front-end computer for EY0

[Nakagaki, Ikeda, YamaT]

We replaced the V3 front-end computer at U13-14 of the EY0 rack and the V1 IO chassis at U18-21 with a V4 front-end computer and a V2 IO chassis, respectively.
The real-time models could be launched, but the 2nd DAC could not be found by the OS for some reason.
We did not have enough time to investigate today, so we will continue this work tomorrow.

-----
Power distribution trouble
Because it is hard to transport many large items to the EYC area via the EYA booth, the two V2 IO chassis (for EYV1 and EY0) and the V4 front-end server (for EY0) were brought to the EYC area via the mine entrance at Mozumi. The V1 IO chassis at U18-21 of the EY0 rack was replaced with one of these V2 IO chassis (S2416123). But the Adnaco boards in the IO chassis were not driven even when the main power switch was enabled. DC 24 V was definitely supplied at the power supply board (ATX-M4), so it seems to be a malfunction downstream of the power distribution board. According to Ikeda-san, he had never turned on this IO chassis before, so it may be an initial defect. In any case, we gave up on using it and used the other IO chassis (S2416124), which was to be used for EYV1 in the future.

Timing loss issue, possibly due to the PEM speaker system
Using S2416124 with the V4 front-end computer, the real-time models could be launched. But we soon found that the timing synchronization did not work, because ADC#0_CH31, which carries the timing duotone signal, was contaminated by a constant ~-4000 ct signal. Though we rebooted the models and/or the front-end computer several times, it did not recover.

During these trials, we noticed that the large PEM speaker output loud sounds when some equipment was turned ON and OFF, and that when the AI chassis connected to the PEM speaker was turned OFF, the speaker output a continuous sound. Timing synchronization finally came back after unplugging the speaker-output DB9 cable from the AI chassis. Even with that AI chassis turned OFF, the timing signal was still lost as long as the DB9 cable remained connected, so the GND connection (shell or pin #5 of the DB9) between the digital system and the speaker system seems to be the cause of this issue.

By the way, the same speaker system has also caused hang-ups on the MCF rack several times (klog#25343, klog#29433, klog#33565, etc.). It might be better to reconsider how the speaker system is connected to the digital system. If it affects the timing, a rough coincidence analysis might still be possible, but a coherence analysis would no longer be reliable.

Missing PCIe card issue
After recovering the timing synchronization, we noticed that the 2nd DAC (DAC#1) was not found by the IOP model. According to the lspci command, the OS could not find DAC#1 either, so it seemed to be an issue at the plain Linux hardware level, not in the LIGO real-time software. Although EY0 has 3 ADCs, 2 DACs, 1 BIO, and 1 BO, this configuration had not been tested on the test bench. So we suspected the card combination issue that was often seen with the V1 IO chassis, and removed the BO card to make the configuration the same as EX0, which was successful in klog#36654. But DAC#1 was still missing.
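As a hedged illustration of the lspci check (the grep pattern is an assumption; General Standards ADC/DAC cards typically enumerate behind a PLX bridge):
> lspci | grep -ci plx
> # compare the count of enumerated PLX devices across power cycles;
> # a DAC missing from the OS lowers the count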

Next, to find out which DAC card was assigned as DAC#1, we swapped the two DAC cards (though we soon realized we cannot tell by this method) and restarted the front-end computer. At that point, ADC#0_CH#30, which is assigned to the duotone loop-back, showed some response when the DAC duotone was enabled. After that, to check the duotone loop-back in the original configuration, we put the two swapped DACs back and restarted. Then DAC#1 was found by the OS and the IOP model, though I have no idea why it came back.

Because DAC#1 had come back, we also restored the removed BO card to return to the original configuration, and then DAC#1 went missing again... We did not have enough time to continue this investigation today, so the work will be continued tomorrow.

Thoughts
The DAC swap and the BO restoration were done without any SCSI or DB37 pin connection. So this issue is related only to the IO chassis or the PCIe cards, not to the connected circuits.

All the PCIe cards in use were simply moved from the V1 IO chassis to the V2 one today. So unless we broke them in today's work, the PCIe cards themselves should be fine. The only concern is that DAC#1 may have been damaged by the speaker issue. We plan to bring a spare DAC card tomorrow just in case.

If the EY0 configuration (3 ADCs, 2 DACs, 1 BIO, and 1 BO) has a problem, permanently removing the BO may be a solution. That is the same configuration as EX0, so it is a reasonable option, though the k1caley model would have to be updated. IX1 and IY1 have 3 ADCs, 3 DACs, 5 BIOs, and 1 BO, so using a BO itself should not be a problem. On the other hand, a card combination issue (the number of each card type, the slots used, etc.) with the V2 IO chassis would not be so surprising, because we faced that kind of issue with the V1 IO chassis in the past. Of course, it may also be a malfunction of the BO card in use, so a check with a spare BO card is also necessary tomorrow.

If S2416124 also has a problem (according to Ikeda-san, it had not been used on the test bench either), we must consider restoring the V1 IO chassis and the V3 front-end computer. In that case, scheduling the use of the Mozumi entrance again (bringing the problematic equipment back and taking new equipment to EY) is also a serious concern.

Since today's results revealed that the tests on the test bench were insufficient, we will likely need to reconsider and accelerate the operation of the test bench.

Comments to this report:
takahiro.yamamoto - 3:15 Saturday 04 April 2026 (36698)
VAC (Valves & Pumps)
nobuhiro.kimura - 12:23 Thursday 02 April 2026 (36691)
Comment to Maintenance Work for the Y-10 vacuum pump unit on the Y-arm (36674)

[Kimura and Yasui]
 It was confirmed that the water level in the tank of the cooling water system attached to the X-10 vacuum pump unit on the X-arm had decreased to approximately one-third of its normal level.
Consequently, on April 1, we performed a complete tank replacement, which also served to refill the water.
 After the replacement work, we left the cooling water system cover off to monitor the rate of water loss.

During the routine inspection on April 3, we will reinstall the cooling water system cover after confirming that there are no further water leaks.

Images attached to this comment
VIS (SRM)
tatsuki.washimi - 10:56 Thursday 02 April 2026 (36690)
Comment to Investigation of BF GAS (36675)

I measured L and R for each IM H coil at the feedthroughs (with cross cables), pins 2-7, with the LCR meter at 100 Hz:

        H1     H2     H3
L [mH]  8.71   8.71   8.67
R [Ω]   19.8   19.8   19.6

DGS (General)
takahiro.yamamoto - 17:37 Wednesday 01 April 2026 (36689)
Comment to Migrating scripts from k1script1 to k1script0. (36519)
All remaining cron-scripts were moved to k1script0.

Note
Though the operation tests of the following scripts on k1script0 were completed, these scripts are currently commented out because they hang up due to run-time errors caused by servers that are unreachable when fetching information:
  1. /opt/rtcds/userapps/release/pem/common/scripts/weewx_sync.sh for Atotsu
  2. /opt/rtcds/userapps/release/pem/k1/scripts/snow/K1PEM_SNOW_DAQ.py
Item 1 is related to klog#36587. This script works well for the Mozumi Weather Station, so it will be restored after the Atotsu Weather Station server is recovered. I could not find out the current status of item 2. In any case, the RaspberryPi for the Mozumi snow monitor is currently unreachable via Ethernet, and the Atotsu snow monitor seems to be unreachable from the RaspberryPi via serial. These situations cause a runtime error in the script due to incomplete data files.
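As an illustrative guard (a sketch only; the host name is a placeholder), such a cron entry could skip the sync when the upstream server is unreachable instead of hanging:
> # skip the sync when the Atotsu server does not answer a single ping
> ping -c 1 -W 5 <atotsu-weather-host> >/dev/null 2>&1 \
>   && /opt/rtcds/userapps/release/pem/common/scripts/weewx_sync.sh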

/kagra/bin/SYS/weather.py had also been producing run-time errors due to the issue with the Atotsu Weather Station (klog#36587). Because of a lack of error handling, the process for the Mozumi Weather Station had also been hung up since last February, even though the Mozumi Weather Station itself is alive. So I added error-handling code to that script so that it operates normally for the stations that are alive even if part of the weather stations is down.

Remaining task
Some Supervisor scripts, including EPICS IOCs, still remain on k1script1. They will be moved on the next maintenance day.
VIS (SRM)
takafumi.ushiba - 12:53 Wednesday 01 April 2026 (36681)
Investigation on FLDACC

Summary:

Not only the H1 but also the H3 FLDACC doesn't seem healthy.
It is very suspicious that they are rubbing somewhere.

What I did:

To investigate whether the pendulums themselves are healthy, I performed a DC response check for the FLDACCs.
The procedure is as follows:

1. Engage the FLDACC feedback controls so that the pendulum is aligned with respect to the LVDT.
2. Turn off the INPUT of the FLDACCSERVO filter banks to hold the outputs.
3. Change the offset to +100, 0, -100, and 0 with a ramp time of 60 seconds (see the sketch below).
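A minimal sketch of step 3 using the EPICS CLI tools (the full channel prefix is an assumption based on the FLDACCSERVO filter-bank name in this log; TRAMP and OFFSET are the standard filter-bank fields):
> # hypothetical channel prefix; adjust to the actual FLDACCSERVO path
> caput K1:VIS-SRM_IP_FLDACCSERVO_H1_TRAMP 60
> for offs in 100 0 -100 0; do
>     caput K1:VIS-SRM_IP_FLDACCSERVO_H1_OFFSET $offs
>     sleep 90   # let the 60 s ramp finish before the next step
> done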

Figures 1, 2, and 3 show the results for the H1, H2, and H3 FLDACC signals, respectively.
The H1 FLDACC signal moves smoothly in the positive direction but not in the negative direction.
The H2 FLDACC signal moves smoothly in both directions.
The H3 FLDACC signal moves smoothly in the positive direction but not in the negative direction, the same as H1.
So only the H2 FLDACC seems healthy.

Since there is a large hysteresis in the H1 and H3 FLDACCs, it is suspicious that their proof masses are rubbing somewhere.

Images attached to this report
Comments to this report:
ryutaro.takahashi - 22:58 Thursday 02 April 2026 (36693)
ryutaro.takahashi - 18:59 Friday 03 April 2026 (36696)
ryutaro.takahashi - 10:01 Saturday 04 April 2026 (36700)
VIS (General)
takafumi.ushiba - 12:52 Wednesday 01 April 2026 (36688)
Comment to New state for FLDACC control on Type-B guardian (36685)

Thanks to the guardian modification, the FLDACCs' local controls could be implemented in the guardian.
They are now automatically engaged when SRM goes to the READY state.

VIS (SRM)
takaaki.yokozawa - 12:28 Wednesday 01 April 2026 (36687)
Comment to Investigation of BF GAS (36675)
[Yokozawa, Washimi, Ushiba (remote)]

> The actuators in the IM H1/2/3 OSEMs are not working.
We checked the signals outside of the SRM chamber.

We added a 100 cnt offset to
K1:VIS-SRM_IM_COILOUTF_H1_EXC
and checked the offset voltage with the voltage monitor.

OK : AI output
OK : input of the coil driver
OK : output of the coil driver
OK : Vmon of the coil driver

After confirming the pay-tripped state
(after this measurement, the SRM guardian was in the pay-tripped state; Ushiba-san noticed it and fixed it)
(when we turned the coil driver on/off, the software WD may have set the tripped state, so Ushiba-san changed the threshold of the software WD)
OK : H1 signal at the output of the satellite box

So we suspect that there is some trouble inside the chamber.
VIS (General)
takahiro.yamamoto - 11:46 Wednesday 01 April 2026 (36685)
New state for FLDACC control on Type-B guardian
I added new states to engage and disengage the FLDACC control in the Type-B guardian.
These new states work the same as the ones in the Type-A guardian.

-----
There are no longer any differences between Type-A and Type-B in terms of state lists and state edges.
As a result, {BS,SR2,SR3,SRM}.py are now derived from TYPEA.py.
{PR2,PR3,PRM}.py continue to be derived from TYPEB.py.
The naming convention of the modules might be worth reconsidering.
Comments to this report:
takafumi.ushiba - 12:52 Wednesday 01 April 2026 (36688)

CAL (Gcal general)
dan.chen - 9:49 Wednesday 01 April 2026 (36680)
Comment to Bring NCal pylons back (36661)
VIS (SRM)
ryutaro.takahashi - 8:41 Wednesday 01 April 2026 (36679)
Comment to Investigation of BF GAS (36675)

The actuators in the IM H1/2/3 OSEMs are not working.

VIS (SRM)
ryutaro.takahashi - 20:46 Tuesday 31 March 2026 (36677)
Comment to Investigation of BF GAS (36675)

[Washimi, Takahashi]

We went in for the recovery work.

  • VAC staff opened the top chamber.
  • We checked the suspension and found that the IRM was touching the EQ stop at X-Y+ (Pic. 1).
  • We screwed out the stopper nuts for the BF keystone to the top of the stud bolts and glued them with TRA-BOND (Pics. 2, 3, 4).
  • We tried to align the TM using the Oplev. We found that the F0 yaw FR was not working during the alignment; the pushing bolt head had detached from the rotor bar (Pic. 5). The rotor was rotated by hand, and the TM was finally aligned.
  • We added the stopper nuts to make the double nuts for the F0 keystone (Pics. 6, 7, 8).
  • VAC staff closed the top chamber temporarily.
Images attached to this comment