Reports of 33856
PEM (Center)
tatsuki.washimi - 20:33 Tuesday 03 March 2026 (36487)
Comment to PEM measurements @ OMC area during the power outage 2026 (36441)

Test data of this setup

Images attached to this comment
MIF (ITF Control)
kentaro.komori - 18:25 Tuesday 03 March 2026 (36486)
Comment to Trial of new DARM hierarchical control (36478)

During this measurement, we found an extremely large coupling between the MN length and the TM pitch.
Efforts to reduce this coupling will be necessary.
The coupling factor will be estimated when we restart this work.

DGS (General)
takahiro.yamamoto - 18:19 Tuesday 03 March 2026 (36485)
Shutdown of digital system in the mine
[Oshino, Ikeda, Nakagaki, Dan, Yuzu, YamaT]

We stopped the digital system in the mine for the planned power outage.

- DHCP in the DGS network is now unavailable, so please use the ICRR or CATV network instead of the DGS network.
- Files in /opt/rtcds/userapps are also unavailable. Because this situation causes trouble when launching a new terminal window, the /opt directory was unmounted on k1ctr0-5.
- Files in /users remain available during the planned power outage.
DGS (General)
takahiro.yamamoto - 17:17 Tuesday 03 March 2026 (36484)
Change in the installed place of k1dc0
To mitigate an IPC glitch issue, the input of the DAQ stream on k1dc0 was duplicated using two Myricom network interface cards (klog#29110). At that time, the new k1dc0 with two Myricom NICs was prepared as a new server chassis in the rack for the real-time front-ends, while the old k1dc0 with one Myricom NIC was kept in its original location in the DAQ rack so that we could recover quickly in case of any trouble with the two-NIC configuration.

As a result, the two-NIC configuration worked well: the rate of IPC glitches was drastically reduced (klog#29571), and k1dc0 itself has worked stably for the past two years. We therefore decided that the old one-NIC k1dc0 kept as a backup was no longer necessary and uninstalled it. In addition, the k1dc0 located in the RTFE rack was moved to the DAQ rack as a rearrangement of the rack layout. The changes in the rack layout in this work are as follows.
- Uninstalled old k1dc0 with 1 NIC at U36-37 of B1 rack.
- Moved k1dc0 from U25-26 of C1 rack to U36-37 of B1 rack.
DGS (General)
takahiro.yamamoto - 16:50 Tuesday 03 March 2026 (36483)
Comment to Compatibility check of a2A5328-4gmPRO camera and pylon-camera-server (36390)

[Yuzu, YamaT]

The a2A5328-4gmPRO camera was moved from the top plate of the fire alarm rack near the OMC area to the top plate of k1boot in the server room (Fig. 1).

It's now connected to the 10Gbps core switch directly, so the bandwidth shortage might be mitigated. (Even so, we need to reconsider the system design for high-resolution cameras; the change in behavior with a different switch may tell us one of the requirements for that system.) The camera servers were already shut down for the planned power outage, so a test with the new configuration will be done after recovering from the outage.

Images attached to this comment
CAL (Pcal general)
dan.chen - 16:37 Tuesday 03 March 2026 (36482)
Shutdown of both Pcals for the planned power outage

Today, we performed the shutdown operations for the Pcal at both end stations in response to the planned power outage.

Remote actions

First, we requested the SAFE state for the Pcal GRDs (a sketch of this step follows the server list below).
While the GRDs were transitioning to SAFE, and before they turned off the laser output, we turned off the GRDs.
After confirming this, we turned off the laser output remotely.

Then, we shut down the following servers remotely:

  • calexa
  • caleyal
  • caleya

Note: calexal was not shut down remotely due to the issue described in klog36479.
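
For reference, this kind of remote state request is just a write to the Guardian request channels. Below is a minimal sketch in Python with ezca, assuming the usual K1:GRD-<NODE>_REQUEST channel convention; the Pcal Guardian node names here are hypothetical, not necessarily the actual ones.

import ezca

ez = ezca.Ezca(ifo='K1')                      # channel-access wrapper, prefixes 'K1:'
for node in ('PCAL_EX', 'PCAL_EY'):           # hypothetical Guardian node names
    ez['GRD-' + node + '_REQUEST'] = 'SAFE'   # request the SAFE state
    print(node, '->', ez['GRD-' + node + '_STATE'])  # confirm the transition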

On-site shutdown procedure (both ends)

  1. Turn off the AOM: power off the AOM driver.
  2. Turn off the laser source:
    1. Turn the key to switch the laser OFF.
    2. If an external voltage source is connected, turn the output OFF and power it down.
    3. Turn off the laser power supply.
  3. Turn off the shutter controller.
  4. Turn off the voltage sources (for shutters, laser source, AOM drivers).
  5. Turn off the circuit power inside the rack under the Tx module.
  6. Shut down the servers (on-site).
  7. Turn off UPS units (two units).
  8. Unplug the power cords.

Note: For Pcal-Y, the QPD box was already powered off (no power supplied) before starting the shutdown procedure.

VIS (SRM)
ryutaro.takahashi - 15:06 Tuesday 03 March 2026 (36481)
Comment to Preparation for SRM installation (36327)

I proceeded to the third reassembly of the broken FLDACCs. The #5 folded-pendulum block was replaced with the #9 block (Picture #1). The assembled pendulum looks fine (Picture #2).

Images attached to this comment
DetChar (General)
hirotaka.yuzurihara - 13:05 Tuesday 03 March 2026 (36480)
Shutdown of detchar computers located in the mine

For the preparation of the planned power outage at the KAGRA site, I shut down the detchar computers located in the mine.

  • 10:06 shutdown of k1det0 and k1det1
    • At ~10:00 on March 5, the computers will be rebooted.
    • During that period, the automatic production of segment files on k1det1 is suspended. The automatic transfer of the segment files will be restarted at ~9:30 on March 6.
  • 11:23 shutdown of k1dettest
    • Note that the hostname in the IP address list is k1detcl3, but the actual hostname was k1dettest.
CAL (XPcal)
dan.chen - 7:25 Tuesday 03 March 2026 (36479)
Laser ON/OFF issue on Pcal-X

I found that when I reboot or turn off the calexal server, the laser source is turned ON! (The LPD shows ~7 V.)

Today, we will use the local interlock key to keep the laser OFF on site and turn OFF the server.

However, this issue should be resolved for safety.

MIF (ITF Control)
kentaro.komori - 1:05 Tuesday 03 March 2026 (36478)
Trial of new DARM hierarchical control

[Tanaka, Ushiba, Aso, Komori]

Abstract:

We began testing a new DARM hierarchical control scheme using only the MN and TM stages to suppress the RMS of the TM feedback signal and enable control with the low-power coil driver.
The filter tested in this trial caused high-frequency saturation in the ALS DARM state; therefore, further investigation and redesign are required.

Detail:

As described in a series of previous trials (klog:33176), we need to implement a new DARM hierarchical control scheme in which the RMS of the TM feedback signal is sufficiently reduced to allow replacement of the current high-power coil driver with a low-power one.
This replacement is crucial because the noise of the high-power coil driver is already subdominant in O4c, and we must transition to the low-power driver to further improve the detector sensitivity.

We revisited the previous trials and adopted a new approach in this study.
Our new approach uses only the MN and TM stages, as this configuration avoids the IM stage, whose actuation path includes a negative zero.

We designed a new open-loop transfer function using the MN and TM stages based on a theoretical multiple-pendulum model, as shown in Fig. 1.
The crossover frequency between the MN and TM stages is approximately 4 Hz.
Please note that the unity gain frequency (UGF) can be increased to approximately 30 Hz at most, because the ALS DARM error signal is noisy and excessive bandwidth leads to saturation.
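
As a rough illustration of how the crossover and UGF arise, here is a toy sketch (not the actual KAGRA suspension or filter design; the plants, controller shapes, and gains are stand-ins chosen only to land the crossover near 4 Hz and the UGF near 30 Hz): the total open-loop transfer function is the sum of the MN and TM branches, and the crossover is where the two branch magnitudes are equal.

import numpy as np

f = np.logspace(-1, 2, 2000)            # 0.1 Hz .. 100 Hz
s = 2j * np.pi * f

def pendulum(f0, Q):
    """Normalized pendulum-like response with resonance f0 and quality factor Q."""
    w0 = 2 * np.pi * f0
    return w0**2 / (s**2 + w0 * s / Q + w0**2)

P_tm = pendulum(0.8, 5)                 # TM actuation: single stage (toy)
P_mn = pendulum(0.8, 5)**3              # MN actuation: filtered through 3 stages (toy)

H_mn = 4.7e6 / s                        # integrator: MN carries the low-frequency load
H_tm = 231 * (1 + s / (2 * np.pi * 5))  # gain + lead for phase margin near the UGF (toy)

G_mn, G_tm = H_mn * P_mn, H_tm * P_tm
G = G_mn + G_tm                         # total open loop of the hierarchy

xover = f[np.argmin(np.abs(np.abs(G_mn) - np.abs(G_tm)))]
ugf = f[np.argmin(np.abs(np.abs(G) - 1))]
print(f"crossover ~ {xover:.1f} Hz, UGF ~ {ugf:.1f} Hz")  # ~4 Hz and ~30 Hz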

We successfully implemented the hierarchical transfer function as designed (Fig. 2).
However, we were unable to close the ALS DARM loop with this filter due to saturation at the MN stage.

The RMS of the current DARM feedback signal is dominated by noise around 30–40 Hz, as expected (Fig. 3).
The phase compensation filter, whose gain increases at higher frequencies, enhances the response in this band and leads to saturation around 30–40 Hz.
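
The dominant band can be read off from a cumulative RMS integrated from the high-frequency end; a minimal sketch, assuming asd is a one-sided amplitude spectral density of the feedback signal on a frequency grid f:

import numpy as np

def cumulative_rms(f, asd):
    """RMS contributed by all frequencies above each f, from a one-sided ASD."""
    psd = np.asarray(asd)**2
    df = np.gradient(f)
    # integrate the PSD from the high-frequency end downward
    return np.sqrt(np.cumsum((psd * df)[::-1])[::-1])

A sharp rise of this curve across 30-40 Hz indicates that this band dominates the total RMS, as seen in Fig. 3.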

We will redesign the filter to mitigate this saturation in the near future.
In addition, another potential approach is to switch from the high-power coil driver with the conventional filter to the low-power driver with the new filter during the transition of the controlled TM between EX and EY after the handover to IR.
Since the IR signal is significantly quieter, we can maintain the UGF at around 100 Hz from the beginning.
This will make the implementation of the new filter more straightforward.

Images attached to this report
PEM (Center)
tatsuki.washimi - 17:57 Monday 02 March 2026 (36471)
Comment to Test the Magneticfield Measurements for the Power Outage (36393)

(Log for Friday's work)

To investigate whether the line noises and the low-frequency 1/f noise were an actual magnetic field or sensing noise, I placed the two sensors facing in opposite directions (along the Y-direction) and checked their coherence and phase.
The spectrum shape is almost the same for both channels.
The coherence was high (~1) for both the line noises and the low-frequency 1/f noise, and the phase was about 180 degrees, so we can say these noises are an actual magnetic field.
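
For reference, this check corresponds to the following computation; a minimal sketch, assuming x and y are the two magnetometer time series sampled at fs:

import numpy as np
from scipy import signal

def coherence_and_phase(x, y, fs, nperseg=4096):
    """Magnitude-squared coherence and cross-spectrum phase between x and y."""
    f, coh = signal.coherence(x, y, fs=fs, nperseg=nperseg)
    _, pxy = signal.csd(x, y, fs=fs, nperseg=nperseg)
    phase = np.degrees(np.angle(pxy))   # ~180 deg expected for anti-aligned sensors
    return f, coh, phase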

After these tests, I recovered the sensor positions and directions to the original ones (Hx for the X-direction, Hy for the Y-direction).

Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 17:41 Monday 02 March 2026 (36477)
Comment to Preparation for SRM installation (36327)

The FLDACCs for SRM were moved back to the SR-OMC area from the BS area. I proceeded to the second reassembly of the broken FLDACCs. The #2 folded-pendulum block was replaced with the #8 block (Picture #1). The assembled pendulum looks fine (Picture #2).

Images attached to this comment
DetChar (General)
takahiro.yamamoto - 15:46 Monday 02 March 2026 (36476)
Updates of makeCache script
As a part of the migration work to the new script server prepared in klog#36430, I updated the makeCache script and moved it from k1script1 to k1script0.

-----
Details of update
The makeCache script had a duplication issue such as klog#34595, and we sometimes needed to fix cache files manually. For future observing runs, I rewrote the makeCache script to remove this issue. In addition, the scattered scripts for each frame type (full, science, second, etc.) were merged into one new script. The old makeCache.sh, which can provide only the full-frame cache, was renamed makeCache_FullFrame.sh, and the new script was added as makeCache.sh.
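
For illustration, the de-duplication such a rewrite needs can be as simple as keying the cache entries on the GPS start time; a minimal sketch, assuming LAL-style cache lines "<site> <type> <gps-start> <duration> <url>" and frame files named like K-K1_FULL-<gps>-<dur>.gwf (the paths and logic here are hypothetical, not the actual makeCache.sh implementation):

import glob
import os

def make_cache(frame_dir, site="K", ftype="K1_FULL"):
    """Build a cache list with one entry per GPS start time (no duplicates)."""
    seen, lines = set(), []
    for path in sorted(glob.glob(os.path.join(frame_dir, "*.gwf"))):
        stem = os.path.basename(path)[:-4]       # strip ".gwf"
        _, _, gps, dur = stem.rsplit("-", 3)     # site-type-gps-dur naming
        if gps in seen:                          # drop duplicated frame entries
            continue
        seen.add(gps)
        lines.append(f"{site} {ftype} {gps} {dur} file://localhost{os.path.abspath(path)}")
    return lines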

These changes were made in the origin/ir1-upgrade branch and haven't been merged into the origin/master branch. They were deployed at Kamioka by switching the branch of /users/DET/tools. The old scripts on k1script1 have now been stopped, and the new script runs on k1script0. I haven't prepared a condor submission file yet, so the old scripts in the origin/master branch are still being used at Kashiwa. After preparing the condor submission file, we can use the new script at Kashiwa and merge the origin/ir1-upgrade branch into the origin/master branch.

By the way, I added an 'O4c' tag to the current HEAD of the origin/master branch as a snapshot of the software used during O4c.

A change in the mountpoint of DET servers
Some DET servers still mounted the hyades-1 disk and used the hyades-0 disk as the primary one. But hyades-1 has already been retired and is currently on standby as a backup server. So I changed the mountpoint from the hyades-1 disk to the hyades-2 disk, and the hyades-2 disk is now set as the primary disk because it can store files for a longer term than the hyades-0 disk.
PEM (General)
takafumi.ushiba - 13:49 Monday 02 March 2026 (36475)
Comment to Modify the k1pemmanage model : add the snow monitor channel (35971)

Since the newly added channels were not monitored by SDF, I started monitoring them as shown in Fig. 1.

Images attached to this comment
MIF (ASC)
dan.chen - 13:46 Monday 02 March 2026 (36474)
ASC SDF accept

With Kenta Tanaka

We accepted SDFs caused by the works on klog36194.

Images attached to this report
VIS (General)
ryutaro.takahashi - 13:25 Monday 02 March 2026 (36473)
SDF check for VIS

[Takahashi, Ikeda]

We checked that the SDF diffs for VIS are zero. We attached MON flags to the SDFs that changed due to the model modification in SRM.

Images attached to this report
IOO (IMC)
takafumi.ushiba - 11:05 Monday 02 March 2026 (36472)
SDF accept for laser temp bias

The SDF of K1:IMC-SERVO_NPRO_TEMP_BIAS_OFFSET has been changed since January 5.
Though laser work was conducted at that time (klog36019), there is no description of a laser temp bias change, and according to the working log there seems to have been no need to change the temp bias.
So I'm not so sure this change is related to that laser work.

Anyway, since we confirmed the IFO can be locked with the current EPICS values, I accepted the SDF shown in Fig. 1.

Images attached to this report
FCL (Infrastructure)
shinji.miyoki - 20:39 Sunday 01 March 2026 (36470)
SR-OMC area cleaning day3

[Miyoki]

The cleaning was done.

The water dripping above the -X side of SRM continues. We put a wavy plastic plate and a dust cloth to catch the dripping water, so we should be careful when the top curtain is opened.

Images attached to this report
DGS (General)
takahiro.yamamoto - 15:44 Sunday 01 March 2026 (36469)
Comment to Compatibility check of a2A5328-4gmPRO camera and pylon-camera-server (36390)

I tried using the a2A5328-4gmPRO camera while it was occupying k1cam1 exclusively,
but high-resolution and/or high-bit-depth recording still failed.
So a dedicated camera server alone is not a solution for this issue,
and either an upgrade of the network capacity or an allocation of dedicated bandwidth seems to be necessary.

-----
Bottleneck investigation
To isolate a network capacity issue from a server capacity issue, I moved all acA640-120gm cameras to k1cam0 and k1cam2, then tested the a2A5328-4gmPRO while k1cam1 was exclusively occupied by it. But capturing high-resolution and/or high-bit-depth images still failed. Increasing the buffer size of the NIC from 256kB to 4096kB (the upper limit) with ethtool didn't change the situation. In this configuration, the a2A5328-4gmPRO should be able to occupy the full 1Gbps bandwidth on the NIC of k1cam1. This situation probably wouldn't be improved by using a 10GigE NIC for k1cam1 either, because the NIC of the a2A5328-4gmPRO is 1GbE.

These facts suggest that the current limitation is the actual throughput of the network path that the a2A5328-4gmPRO can occupy. (I haven't isolated a pure bandwidth issue from a buffer size issue of each network switch.) The current connection of the CAM network is shown in Fig. 1. Though the core camera switch in the server room, on which all traffic concentrates, is a 10GigE one, the edge switches except the one at IOO and the connections are 1GbE. (Because all cameras in the corner station were connected from the IOO rack in the past, only the edge switch at IOO is a 10GigE one.) The acA640-120gm, the main camera model of the KAGRA GigE system, is used with 640x480 resolution, 8-bit depth (Mono8), and 25fps. So ~60Mbps per camera is required, and 1Gbps of bandwidth is enough to stably run 12-13 cameras (corresponding to ~70% of the full bandwidth); managing 18 cameras with 2 camera servers and one 10GigE core switch is therefore sufficient.
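
For concreteness, the arithmetic above, together with the same estimate for the a2A5328-4gmPRO (assuming its 5328x3040 resolution and the few-fps rate suggested by the model name; the exact rate should be taken from the datasheet):

def camera_mbps(width, height, bit_depth, fps):
    """Raw image payload rate in Mbps (GigE Vision protocol overhead excluded)."""
    return width * height * bit_depth * fps / 1e6

print(camera_mbps(640, 480, 8, 25))       # acA640-120gm, Mono8 at 25 fps: ~61 Mbps per camera
print(12 * camera_mbps(640, 480, 8, 25))  # 12 cameras: ~740 Mbps on a 1GbE link
print(camera_mbps(5328, 3040, 8, 4))      # a2A5328-4gmPRO, assumed Mono8 at 4 fps: ~520 Mbps
print(camera_mbps(5328, 3040, 12, 4))     # assumed 12-bit depth: ~780 Mbps, close to the 1GbE limit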

The a2A5328-4gmPRO is designed to use almost the full width of 1Gbps to send captured images. (This is natural behavior: if the bandwidth is limited to slower than 1Gbps, the risk of a connection timeout increases for high-resolution cameras such as the a2A5328-4gmPRO.) So sharing the 1GbE bandwidth with other devices is not a proper configuration for the a2A5328-4gmPRO. If network paths are to be shared, the CAM network should be upgraded to 10GigE, at least for the paths shared with the a2A5328-4gmPRO. Another solution is connecting the a2A5328-4gmPRO directly to the core switch in the server room, but that may be tougher work than preparing a dedicated TCam server physically near the a2A5328-4gmPRO.

Simple camera server application for a2A5328-4gmPRO
To capture images easily for this investigation, I prepared a simple camera server application as /kagra/camera/test-a2A5328-4gmPRO/test_CameraServer.py. It runs on one of the camera servers (currently k1cam1) and listens for various requests from CDS workstations via the client application /kagra/camera/test-a2A5328-4gmPRO/test_CameraClient.py. Because this server application keeps a connection to the camera devices (keeping camera.Open()), the same as the camlan- and pylon-camera-server, it must be stopped before other applications connect to the same camera devices. This application is now managed by systemd in the user slice, so it can be stopped with the following command on k1cam1.
> systemctl --user stop test_CameraServer.service
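
For context, the skeleton of such a keep-open server looks roughly like the following; a minimal sketch with pypylon, in which the single "grab" request, the port, and the raw-bytes reply are hypothetical simplifications, not the actual test_CameraServer.py protocol:

import socket
from pypylon import pylon   # Basler pylon Python bindings

PORT = 50000                # hypothetical port

def main():
    # Hold the device open (camera.Open()) for the server's lifetime, so no
    # other application can attach to the same camera while this runs.
    cam = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    cam.Open()
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    try:
        while True:
            conn, _ = srv.accept()
            with conn:
                if conn.recv(64).decode().strip() == "grab":
                    result = cam.GrabOne(5000)             # 5 s timeout
                    conn.sendall(result.Array.tobytes())   # raw pixel payload
    finally:
        cam.Close()

if __name__ == "__main__":
    main()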

Images attached to this comment
FCL (Infrastructure)
shinji.miyoki - 18:36 Saturday 28 February 2026 (36468)
Comment to SR-OMC area cleaning day2 (36467)

Two FFUs above SRM were recovered by reconnecting the power/line cables near the FFUs. One FFU above SR3 and OMMT was still off, maybe because of a power/signal line disconnection.

The wrapped stainless steel floors near SR2 and SRM were repaired.

Images attached to this comment
FCL (Infrastructure)
shinji.miyoki - 14:00 Saturday 28 February 2026 (36467)
SR-OMC area cleaning day2

[Miyoki]

The Fujimi-sangyou member found a huge water pool on the ceiling just above the -X side of the SRM vacuum tanks (Photo.1). (Maybe Hayakawa-kun also noticed it well before.)

Although a water drain pipe is set just around this pool, there seems to be no water flow in it, maybe because something is stuck at some point. So we tried to drain this water along some possible paths by pushing up the water pool with several T-bars. Fortunately, about 70% of the water could be drained out (Photo.2). We need to fix this drain system while the amount of water is small.

Images attached to this report
PEM (Center)
takaaki.yokozawa - 9:27 Saturday 28 February 2026 (36466)
Comment to PEM injection test 260220 (36400)
Sorry, I omitted the calibration method using diaggui.
The transfer function of the OMC stack (V->V) can be found in JGWDoc17204.

Fig. 1 shows the spectrum in units of m/s^2, applying the transfer function of the OMC stack (V->V).
Fig. 2 shows units of m/s, multiplying by 1/f together with the transfer function of the OMC stack (V->V).
Fig. 3 shows units of m, multiplying by 1/f^2 together with the transfer function of the OMC stack (V->V).
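
For reference, the weighting steps correspond to the following; a minimal sketch, assuming asd_acc is the calibrated m/s^2 spectrum of Fig. 1 (i.e. the OMC stack V->V transfer function has already been applied) and using the 1/f and 1/f^2 factors exactly as stated above:

import numpy as np

def weight_spectrum(f, asd_acc):
    """Apply 1/f and 1/f^2 weightings to an acceleration ASD in m/s^2."""
    f = np.asarray(f)
    return asd_acc / f, asd_acc / f**2   # velocity (m/s, Fig. 2) and displacement (m, Fig. 3)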
Images attached to this comment
FCL (Infrastructure)
shinji.miyoki - 8:34 Saturday 28 February 2026 (36465)
SR-OMC area cleaning day1

[Hayakawa, Uchiyama, Takahashi-m, Sawada, Yamaguchi, Takahashi-r]

Fujimi-sangyo started cleaning of the SR-OMC area. 

Images attached to this report
DGS (General)
takahiro.yamamoto - 0:08 Saturday 28 February 2026 (36464)
Taking system backup of camera servers
Backup of camera servers
Camera servers were upgraded from the camlan camera-server on CentOS7 to the pylon-camera-server on Debian12 (klog#36182 for k1cam2 and klog#36284 for k1cam0). Although OMC_TRANS was kept on k1cam1 with the camlan camera-server due to a compatibility issue between the pylon-camera-server and the implementation of the OMC_LOCK guardian, this was finally solved by the update of the OMC_LSC guardian (klog#36311). Now all cameras can be migrated to the new system, so I took a snapshot of the current system disk of the camera servers.

Frozen console issue
After taking a backup of the system disk, I noticed k1cam2 couldn't reach the login console. At first I thought the system disk had been broken during the backup process. But according to the console logs, grub was alive and the fsck at the beginning of boot-up completed. SSH login was also available. So I could check the detailed system logs and found that tty1 couldn't be refreshed due to a compatibility issue between the on-board graphics and the ATEN KVM. (When I tested it in Mozumi, I used the console, but I always worked via SSH after installing it in the mine, so I hadn't noticed it.) I tuned the kernel parameters to restore the console; the detailed parameters will be added to the installation manual. The tuned parameters were also applied to the backup disk.

Upgrade of k1cam1
In addition, k1cam1 was migrated to the pylon-camera-server system. It is just a backup resource to compensate for the shortage of CPU power on k1cam0 until the hardware replacement of k1cam0 in the next FY. After k1cam1 is retired, we can make space to move the guardian server currently located in the Mozumi building, which has been a long-standing task for DGS.
DGS (General)
takahiro.yamamoto - 21:51 Friday 27 February 2026 (36463)
Comment to An update test of client workstations to Debian13 (36448)
The LIGO CDS team was kind enough to work on a rapid release of the revised version.
The new release is now available as v1.5.3.
It worked well on the Debian13 test stand as the Guardian client.
A test of the Guardian server will also be started soon.
×