OBS (Summary)
koji.nakagaki - 17:18 Saturday 12 July 2025 (34541)
Operation shift summary

Operators name: Hirose, Nakagaki
Shift time: 9-17 (JST)

Check Items:

VAC: No issues were found.
CRY cooler: No issues were found.
Compressor: No issues were found.

IFO state (JST):

09:00 Due to the work done the previous day, OBS INTENT was "OFF", INTERFEROMETER STATUS was "CALIB NOT READY", and LENGTH SENSING CONTROL was "OBSERVATION".

   (Klog #34533) (Klog #34539)

14:37 Ushiba-san turned OBS INTENT "ON", and the status is now "OBSERVING". (Klog #34540)
17:00 This shift was finished. The status was "OBSERVING".

OBS (General)
takahiro.yamamoto - 15:46 Saturday 12 July 2025 (34539)
Lessons for future maintenance and commissioning work
Because of multiple human errors, we could not return to OBSERVING for a while.
To prevent repeating the same mistakes, I have summarized what went wrong and what must be improved.
Since partial commissioning will be resumed, this should be a useful lesson for the many people who will take part in commissioning activities.

Related klogs
- Trigger work: klog#34520
- Calibration works: klog#34530, klog#34533
- Initial notice: klog#34536
- Clearing work: klog#34537
- Confirmation: klog#34538
- Back to Obs.: klog#34540

Points to be improved
1) The SDF clearing work was NOT done in the OBSERVATION state of the LSC_LOCK guardian.
=> We must ensure that all setpoint values in observation.snap are accepted while in the OBSERVATION state.

2) The down.snap/observation.snap loading cycle was skipped in the clearing work.
=> This loading cycle is necessary to mitigate accidental SDF issues.
Though its main purpose is to avoid SDF issues due to numerical rounding errors, it can also prevent mistakes such as the one above.

3) A calibration measurement was started with insufficient checks.
=> A calibration measurement must be started only after confirming that CALIB_NOT_READY or READY is reachable on the IFO guardian.
Though the problem this time fortunately did not affect the calibration measurement, in the worst case a re-measurement could have been required.
If sufficient checks had been done, we probably could have found the problem about two hours earlier.

4) The request state of the IFO guardian was changed by mistake (though this is not directly related to the issue this time).
=> Requests to the interferometer must be made via the LSC_LOCK guardian.
No work requires manual operation of the IFO guardian except for fixing a bug in the IFO guardian itself.
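Lesson 1) is the kind of check that can be enforced mechanically before any SDF acceptance. The sketch below is illustrative only: the guardian state channel name and the injected readback function are assumptions, not the site's actual tooling (in the real system the readback would be something like a pyepics/ezca caget of the guardian state channel).

```python
def assert_state(read_state, channel, expected):
    """Refuse to proceed unless the guardian state matches `expected`.

    `read_state` is any callable returning the current state string
    (in the real system this would be an EPICS caget; it is injected
    here so the sketch stays self-contained and testable).
    """
    state = read_state(channel)
    if state != expected:
        raise RuntimeError(
            f"{channel} is in {state!r}, not {expected!r}; "
            "aborting SDF clearing"
        )
    return state

# Simulated guardian readback: LSC_LOCK is not yet in OBSERVATION,
# so the guard refuses to let the SDF clearing work start.
# (Channel name below is a hypothetical placeholder.)
fake_values = {"K1:GRD-LSC_LOCK_STATE_S": "LOCKED"}
try:
    assert_state(fake_values.get, "K1:GRD-LSC_LOCK_STATE_S", "OBSERVATION")
except RuntimeError as e:
    print(e)
```

Wrapping the acceptance script with such a guard would have turned mistake 1) into an immediate, loud error instead of a silent wrong-state acceptance.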
OBS (General)
takafumi.ushiba - 14:46 Saturday 12 July 2025 (34540)
Comment to Set observing bit (34099)

We turned the OBS INTENT on around 14:37 JST.
The IFO guardian request state was somehow changed to READY around noon yesterday, so we requested the OBSERVING state to the IFO guardian.

Since the IFO guardian state should change according to the other guardian states and channel values, please do not change the IFO guardian state manually.

CAL (General)
takahiro.yamamoto - 14:19 Saturday 12 July 2025 (34538)
Comment to Weekly calibration on 7/11 (34533)

After clearing SDF differences, I checked various changes since 9pm yesterday.

There is no change in foton (Fig.1), guardian code (Fig.2), and models (Fig.3).
Changes in SDF tables come from klog#34537 (see also Fig.4).

Finally, I raised CFC_LATCH and the IFO guardian moved from CALIB_NOT_READY to READY.

Images attached to this comment
OBS (SDF)
ryutaro.takahashi - 13:50 Saturday 12 July 2025 (34537)
Comment to Changes of observation.snap during O4c (34169)

I accepted the following SDF differences again. Although a screenshot was not recorded at the time, the accepted or reverted channels are indicated by red boxes in the attached figure.

Images attached to this comment
OBS (SDF)
takahiro.yamamoto - 20:27 Friday 11 July 2025 (34532)
Comment to Changes of observation.snap during O4c (34169)

K1CALCS

Channels in JGW-L2314962

This is related to klog#34536.
They were updated based on the latest values of the DARM optical gain and the ETMX actuator efficiencies in klog#34533.

Changes were accepted in observation.snap (Fig.1), down.snap (Fig.2), and safe.snap (Fig.3).
Finally, numerical rounding errors were reverted after re-loading observation.snap, as shown in Fig.4.

Images attached to this comment
CAL (General)
takahiro.yamamoto - 20:27 Friday 11 July 2025 (34536)
Comment to Weekly calibration on 7/11 (34533)

I updated line tracking parameters (JGW-L2314962).

All detected changes come from the planned commissioning activities.
- Changes in foton (Fig.1) are related to klog#34533 (k1calcs).
- No changes in guardian (Fig.2).
- Changes in the SDF tables shown in Fig.3-4 are related to klog#34515, klog#34532, klog#34534 (k1calcs), klog#34513, klog#34531 (k1calex, k1caley).
- No changes in the model (Fig.5).

Changes in the SDF table by klog#34520 (k1sdfmanage) were not detected because they were accepted/reverted on down.snap instead of observation.snap, so SDF diffs still remain on k1sdfmanage. I checked whether the remaining differences are the same as those processed on down.snap in klog#34520, and found some inconsistencies between the processed differences and the current values. Since I cannot decide how these remaining changes should be processed, the IFO guardian is still in LOCKED.

Images attached to this comment
OBS (SDF)
dan.chen - 20:10 Friday 11 July 2025 (34534)
Comment to Changes of observation.snap during O4c (34169)
We accepted SDFs related to the cal measurement in observation.snap and safe.snap (k1calcs).
K1:CAL-MEAS_{CURRENT, LATEST}
Images attached to this comment
CAL (General)
dan.chen - 19:57 Friday 11 July 2025 (34533)
Weekly calibration on 7/11
CAL group
We did the calibration measurements and updated the parameters.

Estimated parameters in the Pre-maintenance measurements are as follows.
 H_etmxtm = 3.842050490e-14 @10Hz ( 0.06% from previous measurements)
 H_etmxim = 1.538272700e-14 @10Hz  ( 3.65% from previous measurements)
 Optical_gain = 2.080308644e+12   ( -2.17% from previous measurements)
 Cavity_pole = 18.275875408 Hz  ( 1.30% from previous measurements)

Previous values are listed in klog#34466.

Estimated parameters in the Post-maintenance measurements are as follows.
 H_etmxtm = 3.857020608e-14 @10Hz ( 0.39% from pre-maintenance measurements)
 H_etmxim = 1.603791210e-14 @10Hz  ( 4.26% from pre-maintenance measurements)
 Optical_gain = 2.075302086e+12   ( -0.24% from pre-maintenance measurements)
 Cavity_pole = 18.126316036 Hz  ( -0.82% from pre-maintenance measurements)

Fig.3 and Fig.4 show the fitting results.
Fig.1 and Fig.2 show the ratios of the sensing functions.
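As a cross-check, the quoted percentage changes can be reproduced from the listed values with a few lines of Python (a standalone sketch using only the numbers above, not part of the calibration pipeline):

```python
def pct_change(new, old):
    """Fractional change of `new` relative to `old`, in percent."""
    return (new / old - 1.0) * 100.0

# Pre- and post-maintenance values copied from this report.
pre = {
    "H_etmxtm": 3.842050490e-14,
    "H_etmxim": 1.538272700e-14,
    "Optical_gain": 2.080308644e+12,
    "Cavity_pole": 18.275875408,
}
post = {
    "H_etmxtm": 3.857020608e-14,
    "H_etmxim": 1.603791210e-14,
    "Optical_gain": 2.075302086e+12,
    "Cavity_pole": 18.126316036,
}
for key in pre:
    print(f"{key}: {pct_change(post[key], pre[key]):+.2f}% from pre-maintenance")
```

Running this reproduces the post-maintenance percentages quoted in the lists above to the stated two decimal places.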
Images attached to this report
OBS (SDF)
dan.chen - 18:17 Friday 11 July 2025 (34531)
Comment to Changes of observation.snap during O4c (34169)

We accepted the SDFs reported on klog#34530.

CALEX, CALEY
K1:CAL-PCAL_{EX,EY}_TCAM_{MAIN,PATH1,PATH2}_{X,Y}

Images attached to this comment
CAL (General)
dan.chen - 18:16 Friday 11 July 2025 (34530)
K1:VIS-ETMX_GOOD_OPLEV_YAW

A CAL Tcam session was performed to obtain the beam position information necessary for Pcal. The parameters have already been updated, and the SDF has already been accepted.

Operator: DanChen, ShingoHido

Update Time: 2025/07/11 18:06:51

EPICS Key                     Before [mm]  After [mm]  Δ (After - Before) [mm]
K1:CAL-PCAL_EX_TCAM_PATH1_X     3.20691     3.26692    +0.06001
K1:CAL-PCAL_EX_TCAM_PATH1_Y    62.78539    62.67463    -0.11076
K1:CAL-PCAL_EX_TCAM_PATH2_X    -0.21958    -0.12804    +0.09154
K1:CAL-PCAL_EX_TCAM_PATH2_Y   -63.36743   -63.39445    -0.02702

Update Time: 2025/07/11 18:07:16

EPICS Key                     Before [mm]  After [mm]  Δ (After - Before) [mm]
K1:CAL-PCAL_EX_TCAM_MAIN_X      3.62390     3.38665    -0.23725
K1:CAL-PCAL_EX_TCAM_MAIN_Y     11.89945    12.36073    +0.46128

Update Time: 2025/07/11 18:07:39

EPICS Key                     Before [mm]  After [mm]  Δ (After - Before) [mm]
K1:CAL-PCAL_EY_TCAM_PATH1_X     1.27965     1.51586    +0.23621
K1:CAL-PCAL_EY_TCAM_PATH1_Y    63.48267    63.81734    +0.33467
K1:CAL-PCAL_EY_TCAM_PATH2_X    -0.34771    -0.45437    -0.10666
K1:CAL-PCAL_EY_TCAM_PATH2_Y   -71.06132   -70.58075    +0.48057

Update Time: 2025/07/11 18:07:56

EPICS Key                     Before [mm]  After [mm]  Δ (After - Before) [mm]
K1:CAL-PCAL_EY_TCAM_MAIN_X      8.78471     8.41675    -0.36796
K1:CAL-PCAL_EY_TCAM_MAIN_Y     -3.91830    -4.09518    -0.17689
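The Δ column above can be sanity-checked directly from the Before/After values. This is a standalone sketch; the dictionary below copies a few representative rows from the tables, it does not read the EPICS channels:

```python
# (Before, After) pairs in mm, copied from the Tcam update tables above.
updates = {
    "K1:CAL-PCAL_EX_TCAM_PATH1_X": (3.20691, 3.26692),
    "K1:CAL-PCAL_EX_TCAM_MAIN_Y": (11.89945, 12.36073),
    "K1:CAL-PCAL_EY_TCAM_PATH2_Y": (-71.06132, -70.58075),
}
for ch, (before, after) in updates.items():
    # Recompute Δ = After - Before for each updated channel.
    print(f"{ch}: {after - before:+.5f} mm")
```

For these rows the recomputed deltas match the logged Δ values (+0.06001, +0.46128, +0.48057 mm).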

 

MIF (General)
hirotaka.yuzurihara - 17:19 Friday 11 July 2025 (34527)
lockloss investigation during O4c (2025-07-10 10:53:19.187500 UTC)

[Yokozawa, Yuzurihara]

We performed the lockloss investigation for the recent lockloss at 2025-07-10 10:53:19.187500 UTC. The previous lockloss investigation was posted in klog34259. This was the longest lock in O4c so far.

Quick summary

Although I checked all the lockloss phenomena reported in past klogs, none of them except the OMC DCPD saturation occurred just before this lockloss.
The OMC saturation occurred just before the lockloss, but the quickness of the saturation seems to be different from past saturations. Yokozawa-san and I checked the time series and listed the possible causes of the saturation.

Details

  • There was no excess in seismic motion at 1~10 Hz. (Figure 1)
  • Regarding oscillations, we can see a 0.83 Hz oscillation on PR2 pitch, but the amplitude is not large. (Figure 2)
    • We can see a coincident oscillation (0.83 Hz) on ASC PRC2 pitch and ASC MICH pitch. (Figure 3)
  • There was a small excess in the PRM feedback signal, but it is likely too small to cause the lockloss. (Figure 4)
  • Regarding drift, there was no drift in the Xarm and Yarm mirrors over 30 minutes. (Xarm, Yarm)
  • Regarding the BPC control, there was no strange behavior just before the lockloss.
  • Regarding control saturation, there was no saturation on BS or ETMX.
  • There was no glitch on the ETMY MN oplev.
  • There was no earthquake around the lockloss time.
    • We can see large seismic motion at 3~10 Hz and 10~30 Hz, which ended 20 minutes later. However, this amplitude is not large enough to cause the lockloss. (Figure)

Quick OMC saturation

At this lockloss, we can see the OMC saturation that we have reported many times in past klogs. This is likely the direct cause of the lockloss. Yokozawa-san and I looked into the underlying cause of the OMC saturation.
One important difference between this saturation and past ones is how quickly the saturation was reached. As seen in Figure 9 and Figure 10, the DCPD signal reached saturation within 1 ms, which is remarkably fast.
We think it is difficult for the suspension to produce such a quick saturation, so we suspect it might be associated with an electrical signal or a glitch.

In past saturations, we could see several oscillations just before reaching the saturation, as shown in Figure 11 or this. This might be a hint for investigating the cause further.
In any case, this was a new phenomenon for us (or I had missed it before...).
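The "time to saturation" that distinguishes this event from past ones can be quantified from a DCPD time series. The sketch below is a hypothetical illustration, not the tool we used: the sampling rate and the quiet/saturated thresholds are assumptions chosen for the example.

```python
import numpy as np

FS = 16384.0      # assumed DCPD sampling rate [Hz]
SAT_LEVEL = 0.95  # fraction of full scale treated as "saturated" (hypothetical)

def time_to_saturation(x, full_scale):
    """Seconds between the last quiet sample (<10% of full scale)
    and the first saturated sample, or None if the signal never saturates."""
    a = np.abs(np.asarray(x, dtype=float)) / full_scale
    sat = np.flatnonzero(a >= SAT_LEVEL)
    if sat.size == 0:
        return None
    first_sat = sat[0]
    quiet = np.flatnonzero(a[:first_sat] < 0.1)
    start = quiet[-1] if quiet.size else 0
    return (first_sat - start) / FS

# Synthetic example: a signal that jumps from quiet to the rail in 3 samples,
# i.e. 3 / 16384 Hz ≈ 0.18 ms — well inside the sub-millisecond regime.
x = np.concatenate([0.01 * np.ones(100), [0.3, 0.7], np.ones(10)])
print(time_to_saturation(x, full_scale=1.0))
```

Applying such a metric to past saturations versus this one would put a number on the "quickness" difference noted above.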

Other findings

  • The voltage monitor channels
    • While K1:PEM-VOLD_OMC_RACK_DC_M18_OUT_DQ is stable, K1:PEM-VOLD_OMC_RACK_DC_P18_OUT_DQ shows a drift and was somehow reset. (Figure 12)
    • K1:PEM-VOLT_AS_TABLE_GND_OUT_DQ is almost stable but shows unstable behavior at -5 hours and just before the lockloss.
    • It is highly important to check the grounding condition around the AS port and the OMC rack.
  • PMC control
    • K1:PSL-PMC_LO_POWER_MON_OUT16 shows some jumps before the lockloss. It would be very helpful if a PMC expert could comment on this phenomenon.
Images attached to this report
OBS (Summary)
shoichi.oshino - 17:07 Friday 11 July 2025 (34529)
Operation shift summary
Operators name: Tomaru, Oshino
Shift time: 9-17 (JST)
Check Items:

VAC: No issues were found.
CRY cooler: No issues were found.
Compressor: No issues were found.

IFO state (JST):
09:00 The shift was started. The first half of the calibration work.
Maintenance work.
17:00 This shift was finished. The last half of the calibration work.
FCL (Electricity)
masakazu.aoumi - 16:30 Friday 11 July 2025 (34528)
Monthly inspection of electric equipment
With Nakai-denki-san and Shinko-denki-san
13:00 In
16:00 Out
DGS (General)
takahiro.yamamoto - 16:23 Friday 11 July 2025 (34526)
Comment to Removing old trend frames (34315)
Jul. 11th

I removed old second frames on the disk storage in the mine.
Removed segment is [1432000000, 1434000000).
VIS (EX)
dan.chen - 16:17 Friday 11 July 2025 (34525)
Comment to ETMX IP Offload (34460)

Date: 2025/7/11

I performed this work as a weekly work.

Attached figure is a screen-shot just after the work.

Images attached to this comment
VAC (Tube Y)
nobuhiro.kimura - 16:02 Friday 11 July 2025 (34524)
Refill of cooling water for Y-27 pump unit

[Kimura and M. Takahashi]

 At around 10:20 a.m. on July 11, during a routine patrol of the Y-end, a strange noise was noticed near Y-27.
The source of the noise was the cooling-water unit for the TMP of the Y-27 vacuum pump, and the cause was a lack of cooling water.
Therefore, the GV of the Y-27 vacuum pump was closed and the TMP was stopped.

 At approximately 1:40 p.m., the cooling-water unit was refilled with water to return the Y-27 vacuum pump to service, and the unit was put back into operation.
After the restart, there were no problems with the TMP or the cooling-water unit, and the unit was operating normally.
As a precaution, we decided to leave the repair equipment at Y-27 and check the situation during the next scheduled patrol.

 

Images attached to this report
VAC (Valves & Pumps)
nobuhiro.kimura - 15:53 Friday 11 July 2025 (34523)
Delivery of deliverables (TMP)

[Kimura and H.Sawada]

 Two turbomolecular pumps were delivered.
They were moved to the front room of the parking lot and are temporarily stored next to the lift truck.

Images attached to this report
DGS (General)
satoru.ikeda - 15:50 Friday 11 July 2025 (34522)
SRM GAS Stepper Motor Unavailable

[Nakagaki-san, Ikeda]

Summary:
This work is related to K-Log#34519: Offload of F1 GAS.

Due to a communication failure with the network switch located in the SRM VIS mini-rack, communication with the SRM GAS Stepper Motor was lost.
To recover, we restarted the SRM network switch by unplugging and reconnecting the LAN cable on the OMC side, which supplies PoE to the switch.

Details:
We received a report from R. Takahashi-san that the SRM GAS Stepper Motor was not functioning.

Upon investigation, we found the following:

The relay of the stepper motor showed a slight response.
There was no ping response from k1script1 to the LAN-serial converter.
The SRM VIS mini-rack network switch did respond to pings from k1script1.
However, the web interface of the network switch was unresponsive.
Based on this, we suspected a communication failure within the SRM network switch.

Since the SRM network switch uses PoE, its power can be controlled from the OMC side. However, we were unable to locate the port list for remote control.

Therefore, after consulting in the control room, we decided to enter the tunnel for manual intervention.

14:13 - Entered the center and reconnected the LAN cable from the OMC network switch to the SRM.
14:15 - Confirmed at the BS area workstation that the SRM GAS Stepper Motor could be turned ON via BIO and scripts.
14:18 - Handed over control to Takahashi-san.
14:19 - Exited the central area.

OMC Network Switch Port Assignments:
Pico Network
01: PD mini rack switch  
02: OMC mini rack switch  
03: SR3 mini rack switch  
04: SR2 mini rack switch  
05: SRM mini rack switch  
06: BS mini rack switch  
07: PR2 mini rack switch  
08: Green X Table switch  
09: Green Y Table switch  
10: AS Table switch  
11: Precision Air Processor SRX1
12: AS_WFS HWP  
DGS Network
18: OMC Workstation  
 

Non-image files attached to this report
CRY (General)
nobuhiro.kimura - 15:46 Friday 11 July 2025 (34521)
Comment to Transportation of experimental equipments for Kashiwa (34309)

[Kimura, M.Takahashi, H.Sawada and Yamaguchi]

 We transported the experimental equipment from the parking lot in the mine into the prefab house at the Kohguci.

The equipment will be temporarily stored in the prefab house until July 15.

Images attached to this comment
OBS (SDF)
ryutaro.takahashi - 14:58 Friday 11 July 2025 (34520)
Comment to Changes of observation.snap during O4c (34169)

I accepted the following SDF differences (see related klog34517, klog34518, and klog34519). SDF #25-27 were reverted because of a misoperation.

Images attached to this comment
VIS (SRM)
ryutaro.takahashi - 14:49 Friday 11 July 2025 (34519)
Offload of F1 GAS

[Takahashi, Ikeda]

I offloaded the F1 GAS with the FR. The stepper motor initially did not work due to a network switch problem, which was recovered at the site by Ikeda-san.

VIS (PR3)
ryutaro.takahashi - 14:17 Friday 11 July 2025 (34518)
Offload of SF GAS

I offloaded the SF GAS with the FR, which reached the maximum limit (1360000 steps).

VIS (IY)
ryutaro.takahashi - 14:13 Friday 11 July 2025 (34517)
Offload of F0 GAS

I offloaded the F0 GAS with the FR.

DGS (General)
shoichi.oshino - 13:58 Friday 11 July 2025 (34516)
Exchange k1tw0 SSD
[Nakagaki, Oshino]

We exchanged k1tw0 SSD for the new one.
The data recorded on the previous SSD is currently being copied to the NFS server.
The data from the last six months is being temporarily loaded from an external disk. We plan to switch to a NAS storage space next week.
×