PEM (Center)
tatsuki.washimi - 11:31 Thursday 05 March 2026 (36495)
Comment to PEM measurements @ OMC area during the power outage 2026 (36441)

Plot updates

  • Changed the plot colors to a gradient; the center corresponds to 9:00 a.m.
  • Scaled the data from voltage to the physical unit of each sensor
Images attached to this comment
FCL (Electricity)
takashi.uchiyama - 9:35 Thursday 05 March 2026 (36493)
Comment to Electrical equipment inspection with blackout of KAGRA (36488)
Aso, Miyoki, Hayakawa, Uchiyama

About issue (5):
We recovered the monitoring system of Xend by replacing the AC adapter for the media converter on the morning of March 5, 2026.
PEM (Center)
takaaki.yokozawa - 8:45 Thursday 05 March 2026 (36492)
Comment to PEM injection test 260220 (36400)
I added the RMS values for the red spectrum only.
Images attached to this comment
PEM (Center)
tatsuki.washimi - 0:13 Thursday 05 March 2026 (36491)
Comment to PEM measurements @ OMC area during the power outage 2026 (36441)

Quick results

Images attached to this comment
FCL (Electricity)
takashi.uchiyama - 20:31 Wednesday 04 March 2026 (36489)
Comment to Electrical equipment inspection with blackout of KAGRA (36488)
Memo of the blackout

(1) The dehumidifier at the mine entrance could not be operated after switching to the generator. → A wiring error was found.

(2) The generator voltage was too high and the UPS of the PHS malfunctioned. → The output voltage of the generator was lowered in response.

(3) Burned parts were discovered in the Y-end cubicle. → Reconfirmed by Miyoki, Kimura, and Furuyado.
Traces of burning were found on the control circuit board of the No. 1 conductor contact in the capacitor bank for power-factor improvement, which has three systems (No. 1–No. 3) and is installed in the Y-end receiving panel.

Countermeasures: Since the No. 1 and No. 2 capacitor banks for power-factor improvement are the ones that usually operate, it was decided to swap the No. 3 and No. 1 conductor contacts for the time being and to operate with two capacitor banks. In addition, the conductor contact of No. 3 was manually turned off. The No. 1 conductor contact was taken back to the contractor's factory with a request to replace the control circuit board and perform an operation test. After adjusting the schedule, the repaired No. 1 conductor contact will be reinstalled into the empty slot of the No. 3 conductor contact. It was also confirmed that no power outage is needed for the reinstallation.

We asked the Nakai-denki company to check the number of operations of the capacitor banks at the monthly inspection.

(4) The company members walked inside the EYC clean booth and possibly in EYA. (I had not instructed them not to walk in the clean booth. I am so sorry.)

(5) The monitoring system for fire alarms and other equipment at the mine entrance and Xend has not been recovered due to a broken media converter.
The monitoring system for the mine entrance was recovered by replacing the AC adapter for the media converter.
We will try to recover the monitoring system for Xend tomorrow. Until the recovery, entering the Xend area is prohibited.

(6) Two oil heaters in EYC did not work after the blackout.
Images attached to this comment
DetChar (General)
takahiro.yamamoto - 20:02 Wednesday 04 March 2026 (36490)
Comment to Updates of makeCache script (36476)
The new makeCache script is now running in debug mode on the detchar@Kashiwa cluster.
After confirming that the outputs produced by the old and new scripts are identical, the old script will be stopped and we will migrate to the new one.

-----
The launched script is ~takahiro.yamamoto/.local/src/git/detchar/tools/Cache/Script/makeCache.sh, which is the one in the ir1-upgrade branch.
Submission to HTCondor was done via condor-makeCache.sh in the same directory, which creates a submission file automatically and can show or submit it.
Cache files produced by this script are being served in /home/detchar/Desktop/cache/ as the debug-mode behavior.
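For reference, a minimal sketch of the show-or-submit workflow that condor-makeCache.sh provides, written in Python with purely hypothetical file names, executable path, and arguments (this is not the actual content of the script):

```python
#!/usr/bin/env python3
"""Minimal sketch: generate, show, and optionally submit an HTCondor submit file.
All paths, file names, and arguments here are hypothetical examples."""
import subprocess
import sys

SUBMIT_FILE = "makeCache.sub"  # hypothetical output name

# Hypothetical submit description for running makeCache.sh under HTCondor.
submit_description = """\
executable      = makeCache.sh
arguments       = full
log             = makeCache.log
output          = makeCache.out
error           = makeCache.err
getenv          = True
queue
"""

with open(SUBMIT_FILE, "w") as f:
    f.write(submit_description)

if "--show" in sys.argv:
    print(submit_description)                                    # only display the generated file
elif "--submit" in sys.argv:
    subprocess.run(["condor_submit", SUBMIT_FILE], check=True)   # hand it to HTCondor
```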
FCL (Electricity)
takashi.uchiyama - 18:03 Wednesday 04 March 2026 (36488)
Electrical equipment inspection with blackout of KAGRA
2026/03/04

We performed an electrical equipment inspection with a blackout of KAGRA on March 4, 2026.

The timeline of the blackout is as follows:
8:00, Morning meeting with the company members and start of the preparation work before the blackout.
10:12, Blackout.
13:21, Notification from the Hokuriku Electric Power company that the power outage would start at 13:30 (reconstruction work of the electric pole used by KAGRA, etc.).
16:19, Notification of the completion of the construction from Hokuriku Electric Power.
16:56, Power restored. Start of the recovery work.


Comments to this report:
takashi.uchiyama - 20:31 Wednesday 04 March 2026 (36489)
Memo of the blackout

(1) The dehumidifier at the mine entrance could not be operated after switching to the generator. → A wiring error was found.

(2) The generator voltage was too high and the UPS of the PHS malfunctioned. → The output voltage of the generator was lowered in response.

(3) Burned parts were discovered in the Y-end cubicle. → Reconfirmed by Miyoki, Kimura, and Furuyado.
Traces of burning were found on the control circuit board of the No. 1 conductor contact in the capacitor bank for power-factor improvement, which has three systems (No. 1–No. 3) and is installed in the Y-end receiving panel.

Countermeasures: Since the No. 1 and No. 2 capacitor banks for power-factor improvement are the ones that usually operate, it was decided to swap the No. 3 and No. 1 conductor contacts for the time being and to operate with two capacitor banks. In addition, the conductor contact of No. 3 was manually turned off. The No. 1 conductor contact was taken back to the contractor's factory with a request to replace the control circuit board and perform an operation test. After adjusting the schedule, the repaired No. 1 conductor contact will be reinstalled into the empty slot of the No. 3 conductor contact. It was also confirmed that no power outage is needed for the reinstallation.

We asked the Nakai-denki company to check the number of operations of the capacitor banks at the monthly inspection.

(4) The company members walked inside the EYC clean booth and possibly in EYA. (I had not instructed them not to walk in the clean booth. I am so sorry.)

(5) The monitoring system for fire alarms and other equipment at the mine entrance and Xend has not been recovered due to a broken media converter.
The monitoring system for the mine entrance was recovered by replacing the AC adapter for the media converter.
We will try to recover the monitoring system for Xend tomorrow. Until the recovery, entering the Xend area is prohibited.

(6) Two oil heaters in EYC did not work after the blackout.
Images attached to this comment
takashi.uchiyama - 9:35 Thursday 05 March 2026 (36493)
Aso, Miyoki, Hayakawa, Uchiyama

About issue (5):
We recovered the monitoring system of Xend by replacing the AC adapter for the media converter on the morning of March 5, 2026.
PEM (Center)
tatsuki.washimi - 20:33 Tuesday 03 March 2026 (36487)
Comment to PEM measurements @ OMC area during the power outage 2026 (36441)

Test data of this setup

Images attached to this comment
MIF (ITF Control)
kentaro.komori - 18:25 Tuesday 03 March 2026 (36486)
Comment to Trial of new DARM hierarchical control (36478)

During this measurement, we found an extremely large coupling between the MN length and the TM pitch.
Efforts to reduce this coupling will be necessary.
The coupling factor will be estimated when we restart this work.

DGS (General)
takahiro.yamamoto - 18:19 Tuesday 03 March 2026 (36485)
Shutdown of the digital system in the mine
[Oshino, Ikeda, Nakagaki, Dan, Yuzu, YamaT]

We stopped the digital system in the mine for the planned power outage.

- DHCP in the DGS network is now unavailable, so please use the ICRR or CATV network instead of the DGS network.
- Files in /opt/rtcds/userapps are also unavailable. Because this situation causes trouble when launching a new terminal window, the /opt directory was unmounted on k1ctr0-5.
- Files in /users remain available during the planned power outage.
DGS (General)
takahiro.yamamoto - 17:17 Tuesday 03 March 2026 (36484)
Change in the installation location of k1dc0
To mitigate an IPC glitch issue, the input of the DAQ stream on k1dc0 was duplicated using two Myricom network interface cards (klog#29110). At that time, the new k1dc0 with two Myricom NICs was prepared in a new server chassis at a place in the rack for the real-time front-ends, and the old k1dc0 with one Myricom NIC was kept in its original location in the DAQ rack in order to recover quickly in case of any trouble with the two-NIC configuration.

As a result, the two-NIC configuration worked well: the rate of IPC glitches was drastically reduced (klog#29571) and k1dc0 itself has worked stably over the last two years. This time, we judged that the old k1dc0 configuration kept as a backup was no longer necessary and decided to uninstall the old k1dc0 with one Myricom NIC. In addition, the k1dc0 located in the RTFE rack was moved to the DAQ rack as a rearrangement of the rack layout. The changes in the rack layout in this work are as follows.
- Uninstalled old k1dc0 with 1 NIC at U36-37 of B1 rack.
- Moved k1dc0 from U25-26 of C1 rack to U36-37 of B1 rack.
DGS (General)
takahiro.yamamoto - 16:50 Tuesday 03 March 2026 (36483)
Comment to Compatibility check of a2A5328-4gmPRO camera and pylon-camera-server (36390)

[Yuzu, YamaT]

The a2A5328-4gmPRO camera was moved from the top plate of the fire alarm rack around the OMC area to the top plate of k1boot in the server room (Fig. 1).

It is now connected directly to the 10 Gbps core switch, so the bandwidth shortage might be mitigated. (Even if so, we need to reconsider the system design for high-resolution cameras, but we may be able to learn one of the requirements for that system from how the situation changes with the different switches.) The camera servers were already shut down for the planned power outage, so a test with the new configuration will be done after recovering from the power outage.
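For a rough sense of why the connected switch matters, here is a back-of-envelope throughput estimate; the resolution, bit depth, and frame rate below are assumptions based on the typical specification of this camera family, not measured values:

```python
# Back-of-envelope throughput estimate for a high-resolution GigE camera stream.
# Assumed numbers (not measured): 5328 x 4608 pixels, 8 bit/pixel mono, 4 frames/s.
width, height = 5328, 4608
bits_per_pixel = 8
fps = 4

raw_rate_bps = width * height * bits_per_pixel * fps   # payload only, no protocol overhead
print(f"raw video rate  : {raw_rate_bps / 1e6:.0f} Mbps")
print(f"1 Gbps link use : {raw_rate_bps / 1e9:.0%}")
print(f"10 Gbps link use: {raw_rate_bps / 1e10:.1%}")
```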
 

Images attached to this comment
CAL (Pcal general)
dan.chen - 16:37 Tuesday 03 March 2026 (36482)
Shutdown of both Pcals for the planned power outage

Today, we performed the shutdown operations for the Pcal at both end stations in response to the planned power outage.

Remote actions

First, we requested the SAFE state for the Pcal GRDs.
While the GRDs were transitioning to the SAFE state, and before turning off the laser output, we turned off the GRDs.
After confirming this, we turned off the laser output remotely.

Then, we shut down the following servers remotely:

  • calexa
  • caleyal
  • caleya

Note: calexal was not shut down remotely due to the issue described in klog36479.

On-site shutdown procedure (both ends)

  1. Turn off the AOM: power off the AOM driver.
  2. Turn off the laser source:
    1. Turn the key to switch the laser OFF.
    2. If an external voltage source is connected, turn the output OFF and power it down.
    3. Turn off the laser power supply.
  3. Turn off the shutter controller
  4. Turn off voltage sources (for shutters, laser source, AOM drivers).
  5. Turn off the circuit power inside the rack under the Tx module.
  6. Shut down the servers (on-site).
  7. Turn off UPS units (two units).
  8. Unplug the power cords.

Note: For Pcal-Y, the QPD box was already powered off (no power supplied) before starting the shutdown procedure.

VIS (SRM)
ryutaro.takahashi - 15:06 Tuesday 03 March 2026 (36481)
Comment to Preparation for SRM installation (36327)

I carried out the third reassembly of the broken FLDACCs. The #5 folded-pendulum block was replaced with the #9 block (Picture #1). The assembled pendulum looks fine (Picture #2).

Images attached to this comment
DetChar (General)
hirotaka.yuzurihara - 13:05 Tuesday 03 March 2026 (36480)
Shutdown of detchar computers located in the mine

In preparation for the planned power outage at the KAGRA site, I shut down the detchar computers located in the mine.

  • 10:06 shutdown of k1det0 and k1det1
    • The computers will be rebooted at ~10:00 on March 5.
    • During that period, the automatic production of segment files on k1det1 is suspended. The automatic transfer of the segment files will restart at ~9:30 on March 6.
  • 11:23 shutdown of k1dettest
    • Note that the hostname in the IP address list is k1detcl3, but the actual hostname is k1dettest.
CAL (XPcal)
dan.chen - 7:25 Tuesday 03 March 2026 (36479)
Laser ON/OFF issue on Pcal-X

I found that when I rebooted/turned off the calexal server, the laser source turned ON! (The LPD showed ~7 V.)

Today, we will use the local interlock key to keep the laser OFF on site and turn OFF the server.

However, this issue should be resolved for safety.

MIF (ITF Control)
kentaro.komori - 1:05 Tuesday 03 March 2026 (36478)
Trial of new DARM hierarchical control

[Tanaka, Ushiba, Aso, Komori]

Abstract:

We began testing a new DARM hierarchical control scheme using only the MN and TM stages to suppress the RMS of the TM feedback signal and enable control with the low-power coil driver.
The filter tested in this trial caused high-frequency saturation in the ALS DARM state; therefore, further investigation and redesign are required.

Detail:

As described in a series of previous trials (klog:33176), we need to implement a new DARM hierarchical control scheme in which the RMS of the TM feedback signal is sufficiently reduced to allow replacement of the current high-power coil driver with a low-power one.
This replacement is crucial because the noise of the high-power coil driver is already subdominant in O4c, and we must transition to the low-power driver to further improve the detector sensitivity.

We revisited the previous trials and adopted a new approach in this study.
Our new approach uses only the MN and TM stages, as this configuration avoids the IM stage, whose actuation path includes a negative zero.

We designed a new open-loop transfer function using the MN and TM stages based on a theoretical multiple-pendulum model, as shown in Fig. 1.
The crossover frequency between the MN and TM stages is approximately 4 Hz.
Please note that the unity gain frequency (UGF) can be increased to approximately 30 Hz at most, because the ALS DARM error signal is noisy and excessive bandwidth leads to saturation.
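For illustration only, a toy numerical sketch of how such a crossover frequency can be checked is shown below; the pendulum parameters and gains are invented for this example and are not the actual KAGRA suspension model.

```python
# Toy illustration of an MN/TM hierarchical crossover (NOT the real suspension model).
# Each actuation path is approximated by simple pendulum responses; the relative gain
# is chosen by hand so that the magnitudes cross near 4 Hz.
import numpy as np

f = np.logspace(-1, 2, 4000)        # 0.1 Hz .. 100 Hz
w = 2 * np.pi * f

def pendulum(w, f0, Q):
    """Force-to-displacement response of a single pendulum stage (arbitrary units)."""
    w0 = 2 * np.pi * f0
    return 1.0 / (w0**2 - w**2 + 1j * w0 * w / Q)

# Hypothetical paths: MN actuates through extra stages (steeper roll-off), TM acts directly.
mn_path = 630.0 * pendulum(w, 0.5, 10) * pendulum(w, 0.8, 10)
tm_path = 1.0 * pendulum(w, 0.6, 10)

# Crossover = frequency where the two path magnitudes are equal (ratio = 1).
ratio = np.abs(mn_path) / np.abs(tm_path)
crossover = f[np.argmin(np.abs(np.log10(ratio)))]
print(f"MN/TM crossover ~ {crossover:.1f} Hz")
```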

We successfully implemented the hierarchical transfer function as designed (Fig. 2).
However, we were unable to close the ALS DARM loop with this filter due to saturation at the MN stage.

The RMS of the current DARM feedback signal is dominated by noise around 30–40 Hz, as expected (Fig. 3).
The phase compensation filter, whose gain increases at higher frequencies, enhances the response in this band and leads to saturation around 30–40 Hz.

We will redesign the filter to mitigate this saturation in the near future.
In addition, another potential approach is to switch from the high-power coil driver with the conventional filter to the low-power driver with the new filter during the transition of the controlled TM between EX and EY after the handover to IR.
Since the IR signal is significantly quieter, we can maintain the UGF at around 100 Hz from the beginning.
This will make the implementation of the new filter more straightforward.

Images attached to this report
Comments to this report:
kentaro.komori - 18:25 Tuesday 03 March 2026 (36486)

During this measurement, we found an extremely large coupling between the MN length and the TM pitch.
Efforts to reduce this coupling will be necessary.
The coupling factor will be estimated when we restart this work.

PEM (Center)
tatsuki.washimi - 17:57 Monday 02 March 2026 (36471)
Comment to Test the Magneticfield Measurements for the Power Outage (36393)

(Log for Friday's work)

To investigate whether the line noises and the low-frequency 1/f noise were an actual magnetic field or sensing noise, I oriented the sensors in opposite directions (along the Y direction) and checked their coherence and phase.
The spectral shape is almost the same for both channels.
The coherence was large (~1) for both the line noises and the low-frequency 1/f noise, and the phase was about 180 degrees, so we can say these noises are an actual magnetic field.
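A minimal sketch of this kind of coherence/phase check (with synthetic placeholder data standing in for the two magnetometer channels; the sampling rate and averaging parameters are assumptions):

```python
# Sketch: coherence and cross-spectrum phase between two magnetometer channels.
# x, y are assumed time series of the two sensors (here: synthetic placeholders).
import numpy as np
from scipy import signal

fs = 2048                                       # assumed sampling rate [Hz]
t = np.arange(0, 64, 1 / fs)
common = np.sin(2 * np.pi * 60 * t)             # a common "line" seen by both sensors
x = common + 0.1 * np.random.randn(t.size)      # sensor Hx
y = -common + 0.1 * np.random.randn(t.size)     # sensor Hy, oriented oppositely

f, coh = signal.coherence(x, y, fs=fs, nperseg=fs * 4)
_, pxy = signal.csd(x, y, fs=fs, nperseg=fs * 4)
phase_deg = np.angle(pxy, deg=True)

i60 = np.argmin(np.abs(f - 60))
print(f"coherence @60 Hz: {coh[i60]:.2f}, phase: {phase_deg[i60]:.0f} deg")
# High coherence (~1) with ~180 deg phase indicates a real magnetic field, not sensing noise.
```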

 

After these tests, I restored the sensor positions and directions to the original ones (Hx for the X direction, Hy for the Y direction).

Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 17:41 Monday 02 March 2026 (36477)
Comment to Preparation for SRM installation (36327)

The FLDACCs for SRM were moved back to the SR-OMC area from the BS area. I carried out the second reassembly of the broken FLDACCs. The #2 folded-pendulum block was replaced with the #8 block (Picture #1). The assembled pendulum looks fine (Picture #2).

Images attached to this comment
DetChar (General)
takahiro.yamamoto - 15:46 Monday 02 March 2026 (36476)
Updates of makeCache script
As part of the migration work to the new script server prepared in klog#36430, I updated the makeCache script and moved it from k1script1 to k1script0.

-----
Details of the update
The makeCache script had a duplication issue (e.g. klog#34595), and we sometimes needed to fix the cache files manually. For future observing runs, I rewrote the makeCache script to remove this issue. In addition, the scripts scattered over the frame types (full, science, second, etc.) were merged into one new script. The old makeCache.sh, which could provide only the full-frame cache, was renamed makeCache_FullFrame.sh, and the new script was added as makeCache.sh.
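As an illustration of the kind of manual fix that used to be needed (the cache file name below is a placeholder and the real cache handling in the new script may differ), duplicated entries can be dropped from a cache file while preserving their order as follows:

```python
# Sketch: drop duplicated lines from a frame cache file while keeping the original order.
# "K1_full.lcf" is a placeholder name; the real cache files and their handling may differ.
from pathlib import Path

cache = Path("K1_full.lcf")
seen = set()
deduped = []
for line in cache.read_text().splitlines():
    if line not in seen:          # keep only the first occurrence of each entry
        seen.add(line)
        deduped.append(line)
cache.write_text("\n".join(deduped) + "\n")
```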

These changes were made in the origin/ir1-upgrade branch and have not been merged into the origin/master branch yet. They were deployed at Kamioka by switching the branch of /users/DET/tools. The old scripts on k1script1 have now been stopped and the new script is running on k1script0. I have not yet prepared a condor submission file, so the old scripts in the origin/master branch are still being used at Kashiwa. After preparing the condor submission file, we can use the new script at Kashiwa as well and merge the origin/ir1-upgrade branch into the origin/master branch.

By the way, I added an 'O4c' tag to the current HEAD of the origin/master branch as a snapshot of the software used during O4c.

Change in the mountpoint of the DET servers
Some DET servers still mounted the hyades-1 disk and used the hyades-0 disk as the primary one. However, hyades-1 has already retired and is currently on standby as a backup server. So I changed the mountpoint from the hyades-1 disk to the hyades-2 disk, and the hyades-2 disk is now set as the primary disk because it can store files for a longer term than the hyades-0 disk.
Comments to this report:
takahiro.yamamoto - 20:02 Wednesday 04 March 2026 (36490)
The new makeCache script is now running in debug mode on the detchar@Kashiwa cluster.
After confirming that the outputs produced by the old and new scripts are identical, the old script will be stopped and we will migrate to the new one.

-----
The launched script is ~takahiro.yamamoto/.local/src/git/detchar/tools/Cache/Script/makeCache.sh, which is the one in the ir1-upgrade branch.
Submission to HTCondor was done via condor-makeCache.sh in the same directory, which creates a submission file automatically and can show or submit it.
Cache files produced by this script are being served in /home/detchar/Desktop/cache/ as the debug-mode behavior.
PEM (General)
takafumi.ushiba - 13:49 Monday 02 March 2026 (36475)
Comment to Modify the k1pemmanage model : add the snow monitor channel (35971)

Since the newly added channels were not monitored by SDF, I started monitoring these channels as shown in Fig. 1.

Images attached to this comment
MIF (ASC)
dan.chen - 13:46 Monday 02 March 2026 (36474)
ASC SDF accept

With Kenta Tanaka

We accepted the SDFs caused by the work in klog36194.

 

Images attached to this report
VIS (General)
ryutaro.takahashi - 13:25 Monday 02 March 2026 (36473)
SDF check for VIS

[Takahashi, Ikeda]

We checked whether the SDF differences for VIS are zero. We attached MON flags to the SDFs that changed due to the model modification of SRM.

Images attached to this report
IOO (IMC)
takafumi.ushiba - 11:05 Monday 02 March 2026 (36472)
SDF accept for laser temp bias

The SDF of K1:IMC-SERVO_NPRO_TEMP_BIAS_OFFSET had been changed since January 5.
Although laser work was conducted at that time (klog36019), there is no description of a change in the laser temperature bias, and according to the working log there seems to have been no need to change the temperature bias.
So I am not sure whether this change is related to that laser work.

In any case, since we confirmed that the IFO can be locked with the current EPICS values, I accepted the SDF shown in Fig. 1.

Images attached to this report
FCL (Infrastructure)
shinji.miyoki - 20:39 Sunday 01 March 2026 (36470)
SR-OMC area cleaning day3

[Miyoki]

The cleaning was done.

The water dripping above the -X side of SRM continues. We put a corrugated plastic plate and a dust cloth to catch the dripping water, so we should be careful when the top curtain is opened.

Images attached to this report