Reports of 34221
DGS (General)
takahiro.yamamoto - 20:52 Monday 11 May 2026 (36874)
MTP fiber laying at EXV for V2 IO chassis
[Ikeda, Nakagaki, YamaT]

As initial preparation for the IO chassis replacement, we laid an MTP-LC breakout cable from the EX1 rack in the front room of the 2nd floor to the EXV1 rack.
The new cable is housed in the existing corrugated tube and will be used when the IO chassis of K1EX1 is replaced with the V2 one.

The V2 IO chassis for K1EX1 was also transported to the end station.
We will replace the IO chassis for K1EX1 on a maintenance day in the near future.
DGS (General)
takahiro.yamamoto - 20:42 Monday 11 May 2026 (36872)
Comment to Deployment of V2 IO-chassis and the front-end computer for EX0 (36654)
[Ikeda, Nakagaki, YamaT]

Abstract

All the necessary items had been gathered just before the holidays, so the K1EX0 front-end computer was moved to the EX1 rack on the 2nd floor.
Though some equipment that is no longer necessary, such as the Timing Fanout, remains in the EX0 rack, the replacement of the EX0 IO chassis itself is complete.
Clean-up of the unnecessary equipment can be done outside of maintenance days because we don't need to stop anything for this work.

Because the IRIG-B issue (klog#36705, klog#36819) was reproduced on K1EX0, the IRIG-B card was also replaced with one that showed no problems on the test stand.
After that, the issue disappeared, so we kept the new IRIG-B card in K1EX0.
The problematic one will be checked again on the test stand.

Details

Although the IO chassis had been replaced in klog#36654, the move of the front-end computer to the second floor had been postponed due to a shortage of necessary cables. Because the necessary cables were obtained just before the holidays, we moved the front-end computer from U13-14 of the EX0 rack near the EXC chamber to U22-23 of the EX1 rack on the 2nd floor. There were no major issues with this work, and we didn't encounter any problems with the PCIe cards such as the ADC and DAC. With this work completed, the replacement of the EX0 I/O chassis is now finished, and the equipment in the EX0 rack such as the timing fanout, IRIG-B chassis, KVM, and 12V DC power supply unit is no longer needed. However, this unnecessary equipment is still left at EX0. Since it can be removed without disrupting the digital system, we plan to remove it at a convenient time.

After moving the computer to the second floor, the IRIG-B issue reappeared during the first startup, and neither a reboot nor a cold boot resolved the problem. Though we have not yet identified a clear cause or solution for this issue, we replaced the IRIG-B card with one that had shown no problems on the test bench, and the issue disappeared. So we kept the new IRIG-B card in K1EX0. We cannot yet conclude from today's result that this issue is caused by individual differences among IRIG-B cards. If it is, the problematic cards in the other front-end computers must also be replaced (at least 3 cards). We encountered this issue in 3 of the 4 front-ends we worked on recently. Though I'm not sure whether this reflects the true failure rate or just bad luck, it's possible that a certain percentage of the cards currently in use at the corner station have the same problem, which may require replacements for more units.

IOO (IMC)
satoru.ikeda - 17:34 Monday 11 May 2026 (36871)
Comment to IMC cannot be locked 260511 (36869)
Error: Permission denied, please try again.
Permission denied, please try again.

A permission error occurred during the callback process from the HWP control PC to k1script.

On k1script0, the entry for the callback from the HWP control PC had been commented out in .ssh/authorized_keys.
Since this was an emergency response, we first restored the original settings. Because the callback for the final move command was malfunctioning, we performed a command-line callback from the HWP control PC to k1script0 to restore the callback that had been causing the permission error.
After that, we confirmed that HWP was responding correctly to commands from Guardian and performing the necessary processing.
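The failure mode above can be sketched as follows. This is a minimal illustration, assuming the callback key lives in an authorized_keys file on k1script0; the key material and the "hwp-callback" comment are placeholders, not the real entry.

```shell
# Build a throwaway authorized_keys file to illustrate the problem.
# A leading '#' disables a key line: sshd ignores it, so the client
# using that key gets "Permission denied, please try again."
ak=$(mktemp)
cat > "$ak" <<'EOF'
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... controls@k1script0
#ssh-ed25519 AAAAC3NzaC1lZDI1NTE5BBBB... hwp-callback
EOF
# Listing commented-out entries is a quick way to spot a disabled key.
grep -n '^#' "$ak"
rm -f "$ak"
```

Restoring the callback then amounts to uncommenting that line (removing the leading `#`) in the real authorized_keys file.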

Non-image files attached to this comment
VIS (SRM)
ryutaro.takahashi - 15:39 Monday 11 May 2026 (36870)
Implementation of hierarchical control for TM

I implemented hierarchical control for the TM with the Oplev. To feed back the DC signals to the IM pitch and yaw, the transfer functions from the IM actuator to the TM Oplev were measured (Ptot 1 and 2). The "DC" filter (FM9) in the OLDAMP FILTERS of the TM loop was switched off, and the "int" filter (FM10) in the OLDAMP FILTERS of the IM loop was switched on. The crossover frequency between the TM and IM loops is now below 0.1Hz (I couldn't edit the filter modules in Foton due to file errors).

Images attached to this report
IOO (IMC)
takaaki.yokozawa - 7:55 Monday 11 May 2026 (36869)
IMC cannot be locked 260511
This morning, I noticed that the IMC had not been locked for a long time.
The previous lock loss was on 9th May at 02:12 (JST).

To my eye, the PSL HWP didn't move; could someone check it?
Images attached to this report
Comments to this report:
satoru.ikeda - 17:34 Monday 11 May 2026 (36871)
Error: Permission denied, please try again.
Permission denied, please try again.

A permission error occurred during the callback process from the HWP control PC to k1script.

On k1script0, the entry for the callback from the HWP control PC had been commented out in .ssh/authorized_keys.
Since this was an emergency response, we first restored the original settings. Because the callback for the final move command was malfunctioning, we performed a command-line callback from the HWP control PC to k1script0 to restore the callback that had been causing the permission error.
After that, we confirmed that HWP was responding correctly to commands from Guardian and performing the necessary processing.

Non-image files attached to this comment
DGS (General)
shoichi.oshino - 15:24 Saturday 09 May 2026 (36868)
Installed new network switch at Control Room
As reported in klog21826, the daisy-chained network configuration of the workstations (WS) located at the front of the control room has been improved.
However, in March, an issue was observed on k1mon3 where the displayed positions of the GigE cameras became unstable. This phenomenon has not reoccurred since then, suggesting that its correlation with network speed may be limited.
Even so, the control room displays a large number of ndscope channels along with camera images. To further improve the network environment, a new network switch has been installed. This switch is connected to the upstream network at 10 Gbps, and the available bandwidth is currently considered sufficient.
The switch is installed on the PC wagon of k1mon10. It is connected via an optical fiber cable laid from the computer room next to the control room.
For installation, k1ctr2 and k1ctr3 were temporarily shut down to ensure accessibility. For k1mon[0–4,10] and k1ctr[4–5], the LAN cables were switched without shutting down the workstations. After the work, connectivity with all these WS was confirmed.
To be safe, the previous LAN cables and network switch have been retained in case any issues arise. If no problems are observed after some time, the old equipment will be removed.
DGS (General)
takahiro.yamamoto - 21:29 Friday 08 May 2026 (36867)
Comment to Applying critical security patch (36851)
The same security patches were also applied to k1cam[0-2] and k1script0.

Though mitigation measures for this issue were applied to all affected DGS servers, including intranet ones, a temporary solution is still in use for k1gate and gwdet because the vendor's patches for Red Hat-like OSes had not been released at that time. Today the vendor's patches were released, but there was not enough time to test and apply them because of the HDD trouble on k1script0. So I will apply them to k1gate and gwdet next time and will remove the temporary solution applied in klog#36851.
DGS (General)
takahiro.yamamoto - 21:20 Friday 08 May 2026 (36866)
Strange sound from the power supply unit
Hayakawa-san noticed a strange sound from the power supply unit in the server room in the mine.
Today I confirmed that there is an intermittent loud noise. (I couldn't tell what the sound is; a fan?)

This sound came from the DC+18V power supply unit for the PRM/PR3 racks.
In the past, we had trouble with the DC-18V power supply unit for PRM/PR3 (see also klog#20372).
Though the situation is quite different from the past case, both cases occurred on PRM/PR3.
(These two units are located in the warmest area of the server room.)
It seems better to investigate the cause of this issue and to consider replacing the power supply unit.

To replace it, we first need to check the spare equipment and then stop all circuits in the PR3 and PRM racks.
DGS (General)
satoru.ikeda - 19:02 Friday 08 May 2026 (36864)
Comment to Deployment of V2 IO-chassis and the front-end computer for EY0 (36692)

These are the results of verification conducted using the test bench at the SK Computer room.

K-Log#36698
>  Remaining concern
> Only remaining concern is that it takes a few hours for synchronizing IRIG-B when a cold boot of the front-end computer is performed. In the past, similar things often occurred when down time became so long and a room temperature changed largely maybe because it took long time to be stable in the temperature of a crystal oscillator of timing slave. On the other hand, in this time, it takes long time even when a down time is only a few seconds. Such a thing wasn't reproduced in the test bench. So we have no idea to solve this issue now.

The abnormal behavior of IRIGB_TIME could also be reproduced in the SK computer room.
We believe the cause lies in the IRIG-B card itself.
When the IRIG-B card (S/N: 3799) was installed in a test bench that operates normally, both decreases and increases in the value were observed.
Afterward, the issue still occurred even when the system was swapped back to the old IO chassis (V1 IO chassis).
No issues have been observed with the following IRIG-B cards at SK computer room:
02181
3796
02174
3793
02158
02172
=> Tested in slot 2 with full-height configuration; all others are Low-profile
02171
=> Both the yellow and green LEDs light up, and the yellow LED does not turn off (possible connection failure on this card?)

Based on the above observations, we believe this issue is specific to individual IRIG-B cards.

Additional note: we also need to consider the possibility that this issue is not caused by the IRIG-B card alone, but occurs in combination with other factors.

Images attached to this comment
Non-image files attached to this comment
DGS (General)
takahiro.yamamoto - 18:15 Friday 08 May 2026 (36863)
Recovery from the fatal disk error on k1script0
Though I tried to apply the package updates from klog#36851 to k1script0, the apt command failed with a fatal I/O error.
After investigation, I found that the files required to repair the broken files and sectors were themselves broken, so I gave up on fixing the original system disk.
Finally, I recovered the script server using the backup disk taken last month.

-----
The backup disk was slightly older than the current configuration, so I applied all modifications based on JGW-T2617213. After bringing the backup disk up to the current nominal configuration, I confirmed that all services on k1script0 were recovered. Then the security update I had planned to apply was applied to k1script0. Finally, a new backup of the latest system disk was made.
DGS (General)
takahiro.yamamoto - 2:32 Friday 08 May 2026 (36862)
Comment to Test of a new guardian server (36860)
The loading check of the guardian nodes was done.
A complete user-code check hasn't been done because it requires entering each GuardState one by one.
The GStreamer environment must be set up for the mode analysis by the OMC_LSC guardian.
All guardian nodes should work fine, with minor (but possibly numerous) user-code fixes, after the GStreamer environment is set up.

Special Notes
  • The python3-foton package was installed for loading kagralib.py. Because the automatic filter generation by Guardian that Nakano-kun used is likely no longer in use, python3-foton may no longer be necessary.
  • Log-in configuration from the new guardian server to k1dc0 and the TCam servers was set up for the SYS_DAQ and TCam guardians. DAQ kill and taking TCam photos now work fine on the new guardian server.
  • Thanks to the net-masquerading for the PICO network by k1script, the HWP in the LSC_LOCK guardian can be used without any modification.
  • The mode-analysis binary can also be used on the new server without any linker mismatch.
  • Taking GigE pictures for the mode analysis of OMC_TRANS doesn't work yet because some GStreamer plugins are missing. They will need to be set up on the new guardian server.

VAC (SRM)
takashi.uchiyama - 14:09 Thursday 07 May 2026 (36861)
Comment to Vacuum leak test for SRM (36792)
2026/05/07

I opened GVsrm and closed the GV between the TMP at SRM and the T-duct at 9:40. The vacuum pressure around SRM changed as follows:
(1) Opened GVsrm: 4e-5Pa -> 3e-5Pa,
(2) Closed the GV between the TMP and the T-duct at SRM: 3e-5Pa -> 3 hours -> 5.5e-5Pa.

Since the vacuum pressure became stable at 5.5e-5Pa, I stopped the TMP.
Before stopping the TMP, I set SRM and SR3 to safe. After stopping the TMP, I set SRM to isolated and SR3 to Lock_acquisition.
DGS (General)
takahiro.yamamoto - 21:08 Monday 04 May 2026 (36860)
Test of a new guardian server
A new guardian server was prepared as k1grd1 on the Debian13 system and now only SYS_SDF is running on this new server as an initial test. Detailed installation procedure can be found in JGW-T2617316.

Because the guardctrl, guardlog, and guardutils commands determine which Guardian server to connect to based on the GUARD_HOST, GUARDCTRL_HOST, and GUARDLOG_HOST environment variables, we cannot currently view the logs of SYS_SDF from the MEDM screen. Additionally, we cannot use guardctrl to perform start/stop/restart operations (as opposed to EXEC/PAUSE/STOP via the MEDM screens) without changing these environment variables to "k1grd1". Since I don't think we would perform the above operations, including viewing logs, on SYS_SDF outside of observing runs, it shouldn't be a problem. But please be careful if such a situation arises.
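For anyone who does need to reach SYS_SDF on the new server, a minimal sketch of the environment-variable switch described above (the variable names are taken from this entry; the "status" subcommand is illustrative and not verified against this guardctrl version):

```shell
# Point the Guardian client tools at the new server instead of k1grd0.
export GUARD_HOST=k1grd1
export GUARDCTRL_HOST=k1grd1
export GUARDLOG_HOST=k1grd1

# Then guardctrl/guardlog talk to k1grd1 for this shell session only.
if command -v guardctrl >/dev/null 2>&1; then
    guardctrl status SYS_SDF
else
    echo "guardctrl not installed on this host"
fi
```

Opening a fresh shell (or unsetting the variables) restores the default connection to the production server.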

All other guardian nodes are still running on k1grd0, the current production server, and haven't been tested yet. So they can be operated as usual.
Comments to this report:
takahiro.yamamoto - 2:32 Friday 08 May 2026 (36862)
The loading check of the guardian nodes was done.
A complete user-code check hasn't been done because it requires entering each GuardState one by one.
The GStreamer environment must be set up for the mode analysis by the OMC_LSC guardian.
All guardian nodes should work fine, with minor (but possibly numerous) user-code fixes, after the GStreamer environment is set up.

Special Notes
  • The python3-foton package was installed for loading kagralib.py. Because the automatic filter generation by Guardian that Nakano-kun used is likely no longer in use, python3-foton may no longer be necessary.
  • Log-in configuration from the new guardian server to k1dc0 and the TCam servers was set up for the SYS_DAQ and TCam guardians. DAQ kill and taking TCam photos now work fine on the new guardian server.
  • Thanks to the net-masquerading for the PICO network by k1script, the HWP in the LSC_LOCK guardian can be used without any modification.
  • The mode-analysis binary can also be used on the new server without any linker mismatch.
  • Taking GigE pictures for the mode analysis of OMC_TRANS doesn't work yet because some GStreamer plugins are missing. They will need to be set up on the new guardian server.

DGS (General)
takahiro.yamamoto - 17:54 Monday 04 May 2026 (36859)
Comment to Applying critical security patch (36851)
When Uchiyama-san entered the mine for the VAC work, he rebooted k1ctr15 at IXV.
We could then access k1ctr15 remotely, so I applied the same security patch and rebooted it.
Updates for all workstations (k1ctr0-21, 27, k1mon0-11, and k1naoj02-07) were completed.
VAC (SRM)
takashi.uchiyama - 14:13 Sunday 03 May 2026 (36858)
Comment to Vacuum leak test for SRM (36792)
2026/05/03

Tanikawa, Uchiyama

Around 12:40, we noticed a sudden pressure increase followed by a decrease.
Since we suspected that the ion pump had stopped unexpectedly, we entered KAGRA and checked the ion pump. At the KAGRA site, however, we found that the ion pump and TMP were working normally.

We continue to see changes in the vacuum pressure in SRM.
Images attached to this comment
VAC (SRM)
takashi.uchiyama - 10:56 Sunday 03 May 2026 (36857)
Comment to Vacuum leak test for SRM (36792)
2026/05/03

Tanikawa, Uchiyama

We turned off the ion pump at SRM (#14) at 9:34. After that, the vacuum pressure in SRM increased and reached 1e-3Pa at about 10:50.
Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 10:28 Sunday 03 May 2026 (36856)
Comment to Health check (36669)

I checked the TM transfer functions in a vacuum. The IP setpoints were reset to the original (-164 to -550 for L and -24 to -50 for T), so the Length Oplev was also aligned. The transfer functions were consistent with the references.

Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 10:21 Sunday 03 May 2026 (36855)
Comment to Health check (36669)

I checked the IM transfer functions in a vacuum. They were the same as the recent situation in the vacuum (the DC gain is smaller than the reference, except for L and H1).

Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 10:13 Sunday 03 May 2026 (36854)
Comment to Health check (36669)

I checked the GAS transfer functions in a vacuum. They were consistent with the references.

Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 10:11 Sunday 03 May 2026 (36853)
Comment to Health check (36669)

I checked the IP transfer functions in a vacuum. IDAMP signals are not treated well, so please see the LVDT responses in BLEND signals. The resonant frequencies of the IP were 63mHz for L, 74mHz for T, and 0.32Hz for Y, respectively.

Images attached to this comment
FCL (Air)
shinji.miyoki - 9:10 Sunday 03 May 2026 (36852)
EYC temp adjustment

The EYC1F temperature seemed to have decreased by 2 degrees C over the past 2 months.

I increased the heater powers as follows:

  • Delonghi: 0 -> 1.2kW
  • Sunrise 1: 900W -> 1.2kW
  • Sunrise 2: 600W -> 1.2kW
DGS (General)
takahiro.yamamoto - 0:55 Sunday 03 May 2026 (36851)
Applying critical security patch
A critical security patch was applied to all Debian workstations except k1ctr15 (IXV).
k1ctr15 is currently offline, and I must enter the mine to recover it.
So recovering k1ctr15 and applying the patch to it will be done after the holidays.

The corresponding patch has not yet been released for Red Hat-based operating systems.
So I rebooted the gateway and SummaryPages servers with temporary kernel parameters that disable the vulnerable kernel functions.
After the patch is released and applied, these servers will be rebooted again and the temporary solution removed.
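A temporary kernel-parameter workaround of this kind can be sketched as below, assuming a GRUB-based Red Hat system managed with grubby. The parameter name "vulnerable_feature=off" is a placeholder for illustration, not the actual flag used here; the real commands are shown only as comments since they require root and a reboot.

```shell
# Placeholder kernel parameter standing in for the real mitigation flag.
PARAM="vulnerable_feature=off"
echo "Would append: $PARAM"

# On the real host (root required):
#   grubby --update-kernel=ALL --args="$PARAM"
#   reboot
# To remove the temporary solution once the vendor patch is applied:
#   grubby --update-kernel=ALL --remove-args="vulnerable_feature"
#   reboot
```

Keeping the change in the boot configuration (rather than a one-off sysctl) ensures the mitigation survives reboots until it is deliberately removed.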
Comments to this report:
takahiro.yamamoto - 17:54 Monday 04 May 2026 (36859)
When Uchiyama-san entered the mine for the VAC work, he rebooted k1ctr15 at IXV.
We could then access k1ctr15 remotely, so I applied the same security patch and rebooted it.
Updates for all workstations (k1ctr0-21, 27, k1mon0-11, and k1naoj02-07) were completed.
takahiro.yamamoto - 21:29 Friday 08 May 2026 (36867)
The same security patches were also applied to k1cam[0-2] and k1script0.

Though mitigation measures for this issue were applied to all affected DGS servers, including intranet ones, a temporary solution is still in use for k1gate and gwdet because the vendor's patches for Red Hat-like OSes had not been released at that time. Today the vendor's patches were released, but there was not enough time to test and apply them because of the HDD trouble on k1script0. So I will apply them to k1gate and gwdet next time and will remove the temporary solution applied in klog#36851.
MIF (General)
shun.saito - 22:30 Saturday 02 May 2026 (36850)
Continuation of the PLL Attempt

[Tanaka, Hirose, Saito]

We confirmed that the Moku:Lab was operating properly. We also set up the system so that the local oscillator frequency could be continuously adjusted while monitoring the beat signal frequency. Although various types of filters were tested, the control system unfortunately did not function.
 

  • Following the previous attempt (klog:36845), we continued working on the PLL. First, to verify that the Moku:Lab was functioning correctly, we connected its output to its input and confirmed that a constant voltage applied at the output was observed unchanged at the input. Next, we confirmed that the error signal was properly output when passed through a flat (0 dB) filter.
    We then split the beat signal into two using a power splitter and used an additional Moku:Lab as a spectrum analyzer to continuously monitor the beat signal, allowing us to match the local oscillator frequency accordingly. Observing the mixed beat signal and local oscillator signal on the Moku:Lab oscilloscope, we saw that the amplitude of the error signal fluctuated and its frequency also varied (Photo 1). The red line represents the error signal, and the blue line represents the feedback signal. In this state, we applied various filters, including flat, high-pass, and low-pass filters, but no significant change in the error signal amplitude was observed. We also attempted to measure the open-loop transfer function; however, it did not appear that any meaningful measurement was obtained. Therefore, it is likely that feedback was not properly established. In addition, since the feedback signal amplitude was limited to a maximum of ±1 V, it may have been too small to achieve lock. It may be necessary to amplify the signal using an SR560 or a similar device before feeding it back to the laser PZT.
Images attached to this report
VIS (General)
takashi.uchiyama - 14:21 Saturday 02 May 2026 (36849)
VIS trip recovery (SR2, SR3)
2026/05/02

Tanikawa, Uchiyama, Ushiba

Ushiba-sama informed the shift members that some suspensions (SR2, SR3) had tripped. The shift members performed the recovery process according to the manual.
Finally, SR2 and SR3 were recovered to the LOCK_ACQUISITION state.
Images attached to this report
CAL (YPcal)
Misato Onishi - 16:30 Friday 01 May 2026 (36848)
New rack installation
Dan Chen, Misato Onishi, Seiya Matsuo

We installed a rack near the Tx module and placed the items that were originally located there into the rack.
We also plan to install the new YPcal laser in that rack.
At the end of the work, we confirmed that YPcal reached the High Power state.
Images attached to this report