FCL (Electricity)
shinji.miyoki - 23:49 Thursday 05 March 2026 (36505)
Comment to Electrical equipment inspection with blackout of KAGRA (36488)

I guess the malfunction of the two Sunrise heaters is related to the change in thermal conditions after several years of continuous operation at ~1 kW. The inner circuits of the heaters have been baked at high temperature (< ~60 °C, which is normal according to the manual) for several years, and they might have cracked during the cool-down caused by the power outage.

FCL (Electricity)
shinji.miyoki - 23:43 Thursday 05 March 2026 (36504)
Comment to Electrical equipment inspection with blackout of KAGRA (36488)

We tried to start the BS-C and SR-C Daikin coolers. However, we could not operate the BS-C Daikin cooler in cooling mode, only in air-flow mode, so we started the IYC-C Daikin cooler instead. The BS-C Daikin cooler needs a repair.

DGS (General)
takahiro.yamamoto - 21:47 Thursday 05 March 2026 (36503)
Recovery of digital system
[Oshino, Ikeda, Nakagaki, Sawada, YamaT, and much assistance]

The digital system was recovered from the planned power outage.

The recovery of the end stations proceeded largely as scheduled, and at the corner station, thanks to much assistance, the restoration of the power supply system was completed significantly ahead of schedule. Thanks a lot.

-----

Issues


VIS-TMSX is unavailable
Though all real-time models are now online, VIS-TMSX cannot be used due to the LVDT LO issue reported in klog#36501. The VIS-TMSX guardian must be kept in the SAFE state. (No preparation of the VIS-TMS controls has been done yet, so this is not serious for now.)
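
For reference, keeping a guardian node in SAFE amounts to a state request; a minimal sketch using the EPICS command-line tools (the channel name below follows the usual Guardian naming convention and is my assumption, not taken from this klog):

caput K1:GRD-VIS_TMSX_REQUEST SAFE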

hyades-1 is still offline
hyades-1 could not be launched due to errors on the system disk. hyades-0 and hyades-2 are now the production servers, and hyades-1, which was one of the production servers in the past, is now just a backup server for emergencies, so this is not serious. We need more investigation and some additional work to recover it; purchasing a new disk might be necessary.

Because of the trouble on hyades-1, some servers that mount the data storage of hyades-1 could not be launched. I booted them in rescue mode and modified the mount configuration in /etc/fstab; after that, they came back online. We should make sure to add the "nofail" option to NFS mounts whose mount points are not essential on a given system.
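
For reference, a minimal sketch of such an fstab entry (the export path and mount point below are placeholders, not the actual configuration):

hyades-1:/export/data  /mnt/hyades1  nfs  defaults,_netdev,nofail  0  0

With the "nofail" option, booting proceeds even when this NFS server is unreachable, instead of the machine being dropped into emergency mode.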

Malfunction of server power supply units
On k1ioo0 and k1script1, one of the redundant power supply units was beeping. They were connected to the UPS via the AC power strip in the server rack. Since the other healthy servers were connected to the same AC power strip and the input power of the UPS was shut off by the breaker at the power distribution panel, this may not be related to the power outage itself. In any case, they were recovered by replacing the faulty power supply units with spares.
VAC (General)
takashi.uchiyama - 17:45 Thursday 05 March 2026 (36500)
Recovery of the vacuum system after the blackout
2026/03/05

Kimura, Yasui, SawadaH, Uchiyama

We performed recovery work for the vacuum system after the blackout.
- We have not opened the GVs yet, except for GVpr3, GVsrm, and GVtmsy.
- Closed GVs: GVifi, GVbsx, GVbsy, GVitmx, GVetmx, GVtmsx, GVitmy, GVetmy, GVommt
- We started operation of all TMPs except for #10 (PRM-PR3) and #26 (at TMSY).
- Turned on the vacuum gauges and Raspberry Pis.

We found that the vacuum gauge at Xarm21 was out of order.
Some of the vacuum pressure values in the MEDM overview screen were shown as empty. We asked the DGS members to help us. → Finally solved.
The color of GVifi in the MEDM overview screen looks strange (it should be red). We will check tomorrow.
Images attached to this report
VAC (General)
shoichi.oshino - 17:37 Thursday 05 March 2026 (36502)
Comment to Recovery of the vacuum system after the blackout (36500)
With Yamamoto-san's advice, I found that the cause was a failure to start the process on the Raspberry Pi.
After restarting the process, the values could be read again.
AOS (Beam Reducing Telescopes)
yoichi.aso - 17:19 Thursday 05 March 2026 (36501)
Signal generator for the TMSX LVDT is dead

During the recovery work from the blackout, I noticed that the signal generator used for the LO of the LVDTs of the TMSX-VIS was dead. 

When I turned on the generator, it showed "SYSTEM TEST FAILED 001" immediately.

The generator was turned off and the AC cable was removed before the blackout. Still, it is dead now. It was probably near the end of its lifetime. R.I.P.

As the TMSX-VIS is not actively controlled at this moment, this will not cause any immediate problem.
Yet, we should replace the signal source asap.

Images attached to this report
VIS (General)
ryutaro.takahashi - 16:32 Thursday 05 March 2026 (36498)
Recovery of SGs for LVDT

I recovered the signal generators for the LVDTs after the power outage. Some commercial SGs were initialized when the power went off; their amplitudes were set back to the same values as before.

PEM (Center)
tatsuki.washimi - 14:15 Thursday 05 March 2026 (36497)
Comment to PEM measurements @ OMC area during the power outage 2026 (36441)

Air environment during the power outage

The humidity was kept below 60%.

Images attached to this comment
CAL (Pcal general)
dan.chen - 13:35 Thursday 05 March 2026 (36496)
Comment to Shutdown of both Pcals for the planned power outage (36482)

With Shingo Hido

We recovered both Pcals.

There was no issue.

PEM (Center)
tatsuki.washimi - 11:31 Thursday 05 March 2026 (36495)
Comment to PEM measurements @ OMC area during the power outage 2026 (36441)

Plots update

  • Changed the plot colors to a gradient; the center corresponds to 9:00 a.m.
  • Scaled from voltage to each physical unit
Images attached to this comment
FCL (Electricity)
takashi.uchiyama - 9:35 Thursday 05 March 2026 (36493)
Comment to Electrical equipment inspection with blackout of KAGRA (36488)
Aso, Miyoki, Hayakawa, Uchiyama

About issue (5):
We recovered the monitoring system of Xend by changing the AC adapter for the media converter on the morning of March 5, 2026.
PEM (Center)
takaaki.yokozawa - 8:45 Thursday 05 March 2026 (36492)
Comment to PEM injection test 260220 (36400)
I added the RMS values only for the red spectrum.
Images attached to this comment
PEM (Center)
tatsuki.washimi - 0:13 Thursday 05 March 2026 (36491)
Comment to PEM measurements @ OMC area during the power outage 2026 (36441)

Quick results

Images attached to this comment
FCL (Electricity)
takashi.uchiyama - 20:31 Wednesday 04 March 2026 (36489)
Comment to Electrical equipment inspection with blackout of KAGRA (36488)
Memo of the blackout

(1) The dehumidifier at the mine entrance could not be operated after switching to the generator. → A wiring error was found.

(2) The voltage of the generator was too high and the UPS of the PHS was malfunctioning. → The output voltage of the generator was lowered in response.

(3) Burned parts were discovered in the Yend cubicle. → Reconfirmed by Miyoki, Kimura, and Furuyado.
Traces of burning were found on the control circuit board of the No. 1 contactor in the power-factor-correction capacitor bank of three systems (No. 1 to No. 3) installed in the Y-end receiving panel.

Countermeasures: Since the No. 1 and No. 2 power-factor-correction capacitor banks usually work, it was decided to swap the No. 3 and No. 1 contactors for the time being and to operate with a two-system capacitor bank. In addition, the No. 3 contactor was manually switched off. The No. 1 contactor was taken back to the contractor's factory with a request to replace its control circuit board and perform an operation test. After adjusting the schedule, the repaired No. 1 contactor will be reinstalled into the empty slot of the No. 3 contactor. It was also confirmed that no power outage will be needed for the reinstallation.

We asked the Nakai-denki company to check the number of operations of the capacitor bank at the monthly inspection.

(4) The company members walked inside the EYC booth and maybe in EYA. (I had not instructed them not to walk in the clean booths. I am so sorry.)

(5) The monitoring systems for fire alarms and so on, at the mine entrance and Xend, had not been recovered due to broken media converters.
The monitoring system at the mine entrance was recovered by changing the AC adapter for the media converter.
We will try to recover the monitoring system for Xend tomorrow. Until its recovery, entering the Xend area is prohibited.

(6) Two oil heaters in EYC did not work after the blackout.
Images attached to this comment
DetChar (General)
takahiro.yamamoto - 20:02 Wednesday 04 March 2026 (36490)
Comment to Updates of makeCache script (36476)
The new makeCache script is now running in debug mode on the detchar@Kashiwa cluster.
After confirming that the outputs produced by the old and new scripts are identical, we will stop the old script and migrate to the new one.

-----
The launched script is located at ~takahiro.yamamoto/.local/src/git/detchar/tools/Cache/Script/makeCache.sh, which is the one in the ir1-upgrade branch.
Submission to HTCondor was done via condor-makeCache.sh in the same directory, which creates a submission file automatically and can either show or submit it.
Cache files produced by this script are served in /home/detchar/Desktop/cache/ as part of the debug-mode behavior.
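
For reference, a minimal sketch of the kind of HTCondor submit description such a wrapper generates (the file and log names below are hypothetical, not the actual output of condor-makeCache.sh):

# makeCache.sub (hypothetical)
executable = makeCache.sh
log        = makeCache.log
output     = makeCache.out
error      = makeCache.err
queue

A file like this can then be printed for inspection or handed to condor_submit, which matches the "show or submit" behavior described above.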
FCL (Electricity)
takashi.uchiyama - 18:03 Wednesday 04 March 2026 (36488)
Electrical equipment inspection with blackout of KAGRA
2026/03/04

We performed an electrical equipment inspection with a blackout of KAGRA on March 4, 2026.

The timeline of the blackout is as follows:
8:00, Morning meeting with company members; started the preparation work before the blackout.
10:12, Blackout.
13:21, Notification from the Hokuriku Electric Power company of a power outage starting at 13:30 (reconstruction of the electric pole used by KAGRA, etc.).
16:19, Notification of completion of the construction from Hokuriku Electric Power.
16:56, Power restored. Start of the recovery work.


PEM (Center)
tatsuki.washimi - 20:33 Tuesday 03 March 2026 (36487)
Comment to PEM measurements @ OMC area during the power outage 2026 (36441)

Test data of this setup

Images attached to this comment
MIF (ITF Control)
kentaro.komori - 18:25 Tuesday 03 March 2026 (36486)
Comment to Trial of new DARM hierarchical control (36478)

During this measurement, we found an extremely large coupling between the MN length and the TM pitch.
Efforts to reduce this coupling will be necessary.
The coupling factor will be estimated when we restart this work.

DGS (General)
takahiro.yamamoto - 18:19 Tuesday 03 March 2026 (36485)
Shutdown of digital system in the mine
[Oshino, Ikeda, Nakagaki, Dan, Yuzu, YamaT]

We stopped the digital system in the mine for the planned power outage.

- DHCP in the DGS network is now unavailable, so please use the ICRR or CATV network instead of the DGS network.
- Files in /opt/rtcds/userapps are also unavailable. Because this situation causes trouble when launching a new terminal window, the /opt directory was unmounted on k1ctr0-5.
- Files in /users remain available during the planned power outage.
DGS (General)
takahiro.yamamoto - 17:17 Tuesday 03 March 2026 (36484)
Change of the installation location of k1dc0
To mitigate an IPC glitch issue, the input of the DAQ stream on k1dc0 was duplicated using two Myricom network interface cards (klog#29110). At that time, the new k1dc0 with two Myricom NICs was prepared in a new server chassis placed in the rack for the real-time front-ends, while the old k1dc0 with one Myricom NIC was kept in its original location in the DAQ rack so that we could recover quickly in case of any trouble with the two-NIC configuration.

As a result, the two-NIC configuration has worked well: the rate of IPC glitches was drastically reduced (klog#29571) and k1dc0 itself has worked stably over the recent two years. We therefore decided that the old k1dc0 configuration was no longer necessary as a backup and uninstalled the old k1dc0 with one Myricom NIC. In addition, the k1dc0 located in the RTFE rack was moved to the DAQ rack as a rearrangement of the rack layout. The changes in the rack layout in this work are as follows.
- Uninstalled old k1dc0 with 1 NIC at U36-37 of B1 rack.
- Moved k1dc0 from U25-26 of C1 rack to U36-37 of B1 rack.
DGS (General)
takahiro.yamamoto - 16:50 Tuesday 03 March 2026 (36483)
Comment to Compatibility check of a2A5328-4gmPRO camera and pylon-camera-server (36390)

[Yuzu, YamaT]

The a2A5328-4gmPRO camera was moved from the top plate of the fire alarm rack around the OMC area to the top plate of k1boot in the server room (Fig. 1).

It is now connected directly to the 10 Gbps core switch, so the bandwidth shortage might be mitigated. (Even so, we need to reconsider the system design of the high-resolution cameras; still, we may learn one of the requirements for that system from how the situation changes with a different switch.) The camera servers were already shut down for the planned power outage, so a test with the new configuration will be done after recovering from it.

Images attached to this comment
CAL (Pcal general)
dan.chen - 16:37 Tuesday 03 March 2026 (36482)
Shutdown of both Pcals for the planned power outage

Today, we performed the shutdown operations for the Pcal at both end stations in response to the planned power outage.

Remote actions

First, we requested the SAFE state for the Pcal GRDs.
After the GRDs had transitioned to the SAFE state, and before turning off the laser output, we turned the GRDs off.
After confirming this, we turned off the laser output remotely.

Then, we shut down the following servers remotely:

  • calexa
  • caleyal
  • caleya

Note: calexal was not shut down remotely due to the issue described in klog#36479.
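
For reference, the remote shutdown of the servers listed above is typically a one-liner per host; a sketch assuming SSH access (the actual commands used are not recorded in this entry):

ssh calexa 'sudo shutdown -h now'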

On-site shutdown procedure (both ends)

  1. Turn off the AOM: power off the AOM driver.
  2. Turn off the laser source:
    1. Turn the key to switch the laser OFF.
    2. If an external voltage source is connected, turn the output OFF and power it down.
    3. Turn off the laser power supply.
  3. Turn off the shutter controller
  4. Turn off voltage sources (for shutters, laser source, AOM drivers).
  5. Turn off the circuit power inside the rack under the Tx module.
  6. Shut down the servers (on-site).
  7. Turn off UPS units (two units).
  8. Unplug the power cords.

Note: For Pcal-Y, the QPD box was already powered off (no power supplied) before starting the shutdown procedure.

VIS (SRM)
ryutaro.takahashi - 15:06 Tuesday 03 March 2026 (36481)
Comment to Preparation for SRM installation (36327)

I proceeded to the third reassembly of the broken FLDACCs. The #5 folded-pendulum block was replaced with the #9 block (Picture #1). The assembled pendulum looks fine (Picture #2).

Images attached to this comment
DetChar (General)
hirotaka.yuzurihara - 13:05 Tuesday 03 March 2026 (36480)
Shutdown of detchar computers located in the mine

In preparation for the planned power outage at the KAGRA site, I shut down the detchar computers located in the mine.

  • 10:06 shutdown of k1det0 and k1det1
    • At ~10:00 on March 5, the computers will be rebooted.
    • During that period, the automatic production of segment files on k1det1 is suspended. The automatic transfer of the segment files will be restarted at ~9:30 on March 6.
  • 11:23 shutdown of k1dettest
    • Note that the hostname in the IP address list is k1detcl3, but the actual hostname was k1dettest.
CAL (XPcal)
dan.chen - 7:25 Tuesday 03 March 2026 (36479)
Laser ON/OFF issue on Pcal-X

I found that when I rebooted or turned off the calexal server, the laser source was turned ON! (The LPD showed ~7 V.)

Today, we will use the local interlock key to keep the laser OFF on site and turn OFF the server.

But this issue should be resolved for safety.
