We turned on the OBS INTENT around 14:37 JST.
The IFO guardian request state had somehow been changed to READY around noon yesterday, so we requested the OBSERVING state from the IFO guardian.
Since the IFO guardian state should change in accordance with the other guardian states and channel values, please do not change the IFO guardian state manually.
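For reference, a minimal sketch of how such a state request can be sent programmatically, assuming pyepics is available and that the request/state channels follow the usual K1:GRD-IFO_* naming (the exact channel suffixes are an assumption and should be checked against the site guardian screens):

```python
# Sketch only: request the OBSERVING state from the IFO guardian node.
# Assumptions: pyepics installed; channel names K1:GRD-IFO_REQUEST and
# K1:GRD-IFO_STATE_S are examples and should be verified on site.
from epics import caget, caput

def request_ifo_state(target="OBSERVING"):
    current = caget("K1:GRD-IFO_STATE_S", as_string=True)  # current guardian state
    print(f"IFO guardian currently in: {current}")
    caput("K1:GRD-IFO_REQUEST", target)                    # hand the request to guardian
    print(f"Requested: {target}")

if __name__ == "__main__":
    request_ifo_state("OBSERVING")
```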
After clearing the SDF differences, I checked for various changes since 9 pm yesterday.
There are no changes in foton (Fig.1), the guardian code (Fig.2), or the models (Fig.3).
The changes in the SDF tables come from klog#34537 (see also Fig.4).
Finally, I raised CFC_LACTCH and the IFO guardian moved from CALIB_NOT_READY to READY.
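As a reference for this kind of check, here is a minimal sketch that lists files modified after a cutoff time; the directories below are examples, not the actual site paths:

```python
# Sketch only: list files changed since 21:00 yesterday under a few directories.
# Replace the example paths with the real foton/guardian/model locations.
from datetime import datetime, timedelta
from pathlib import Path

cutoff = (datetime.now() - timedelta(days=1)).replace(hour=21, minute=0, second=0, microsecond=0)

search_dirs = {
    "foton filter files": Path("/opt/rtcds/kamioka/k1/chans"),                  # example
    "guardian code":      Path("/opt/rtcds/userapps/release/sys/k1/guardian"),  # example
    "models":             Path("/opt/rtcds/userapps/release/cds/k1/models"),    # example
}

for label, root in search_dirs.items():
    if not root.is_dir():
        print(f"{label}: directory not found ({root})")
        continue
    recent = [p for p in root.rglob("*")
              if p.is_file() and datetime.fromtimestamp(p.stat().st_mtime) > cutoff]
    print(f"{label}: {len(recent)} file(s) modified since {cutoff:%Y-%m-%d %H:%M}")
    for p in sorted(recent):
        print("   ", p)
```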
I again accepted the following SDF differences. Although a screenshot was not recorded at the time, the accepted or reverted channels are indicated by red boxes in the attached figure.
Channels in JGW-L2314962
They are related to klog#34536.
They were updated based on the latest values of the DARM optical gain and the ETMX actuator efficiencies reported in klog#34533.
Changes were accepted on observation.snap (Fig.1), down.snap (Fig.2), and safe.snap (Fig.3).
Finally, differences caused by numerical rounding errors were reverted after re-loading observation.snap, as shown in Fig.4.
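A minimal sketch of how rounding-level differences can be separated from real setpoint changes, assuming pyepics and a simple burt-style "CHANNEL count VALUE" snap layout (the real parser and tolerance should follow the site SDF tools):

```python
# Sketch only: flag SDF differences larger than a rounding tolerance.
# Assumptions: pyepics available; snap lines look like "CHANNEL <count> <value>";
# the tolerance below is arbitrary and for illustration.
import math
from epics import caget

REL_TOL = 1e-6  # treat smaller relative differences as numerical rounding

def check_snap(snap_path):
    with open(snap_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3 or line.startswith("---"):
                continue                      # skip burt headers / comments
            chan, raw = parts[0], parts[2]
            try:
                setpoint = float(raw)
            except ValueError:
                continue                      # skip string-valued records in this sketch
            live = caget(chan)
            if isinstance(live, (int, float)) and not math.isclose(live, setpoint, rel_tol=REL_TOL):
                print(f"{chan}: snap={setpoint!r} live={live!r}")

check_snap("observation.snap")  # path is an example
```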
I updated the line tracking parameters (JGW-L2314962).
All detected changes come from the planned commissioning activities.
- Changes in foton (Fig.1) are related to klog#34533 (k1calcs).
- No changes in guardian (Fig.2).
- Changes in the SDF tables shown in Fig.3-4 are related to klog#34515, klog#34532, klog#34534 (k1calcs), klog#34513, and klog#34531 (k1calex, k1caley).
- No changes in the model (Fig.5).
The changes in the SDF table from klog#34520 (k1sdfmanage) were not detected because they were accepted/reverted on down.snap instead of observation.snap, so SDF diffs still remain on k1sdfmanage. I checked whether the remaining differences are the same as those processed on down.snap in klog#34520 and found some inconsistencies between the processed differences and the current values. I therefore cannot decide how these remaining changes should be processed, and the IFO guardian is still in LOCKED.
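To make the consistency check above concrete, here is a minimal sketch comparing the channel values processed on down.snap (klog#34520) with the values currently shown as differences; the entries are hypothetical placeholders, not the real channels:

```python
# Sketch only: compare what was processed on down.snap with the current SDF diffs.
# The channel names and values below are hypothetical placeholders.
processed_on_down = {
    "K1:EXAMPLE-CHANNEL_A": 1.25,
    "K1:EXAMPLE-CHANNEL_B": -3.0,
}
current_diffs = {
    "K1:EXAMPLE-CHANNEL_A": 1.25,
    "K1:EXAMPLE-CHANNEL_B": -2.7,   # does not match what was processed
}

for chan, processed in processed_on_down.items():
    now = current_diffs.get(chan)
    if now is None:
        print(f"{chan}: no longer shows a difference")
    elif now != processed:
        print(f"{chan}: INCONSISTENT (processed {processed}, current {now})")
    else:
        print(f"{chan}: consistent")
```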
We accepted the SDFs reported on klog#34530.
CALEX, CALEY
K1:CAL-PCAL_{EX,EY}_TCAM_{MAIN,PATH1,PATH2}_{X,Y}
A CAL Tcam session was performed to obtain the beam position information necessary for Pcal. The parameters have already been updated, and the SDF has already been accepted.
Operator: DanChen, ShingoHido
Update Time: 2025/07/11 18:06:51
| EPICS Key | Before [mm] | After [mm] | Δ (After - Before) [mm] |
| --- | --- | --- | --- |
| K1:CAL-PCAL_EX_TCAM_PATH1_X | 3.20691 | 3.26692 | +0.06001 |
| K1:CAL-PCAL_EX_TCAM_PATH1_Y | 62.78539 | 62.67463 | -0.11076 |
| K1:CAL-PCAL_EX_TCAM_PATH2_X | -0.21958 | -0.12804 | +0.09154 |
| K1:CAL-PCAL_EX_TCAM_PATH2_Y | -63.36743 | -63.39445 | -0.02702 |
Update Time: 2025/07/11 18:07:16
| EPICS Key | Before [mm] | After [mm] | Δ (After - Before) [mm] |
| --- | --- | --- | --- |
| K1:CAL-PCAL_EX_TCAM_MAIN_X | 3.62390 | 3.38665 | -0.23725 |
| K1:CAL-PCAL_EX_TCAM_MAIN_Y | 11.89945 | 12.36073 | +0.46128 |
Update Time: 2025/07/11 18:07:39
| EPICS Key | Before [mm] | After [mm] | Δ (After - Before) [mm] |
| --- | --- | --- | --- |
| K1:CAL-PCAL_EY_TCAM_PATH1_X | 1.27965 | 1.51586 | +0.23621 |
| K1:CAL-PCAL_EY_TCAM_PATH1_Y | 63.48267 | 63.81734 | +0.33467 |
| K1:CAL-PCAL_EY_TCAM_PATH2_X | -0.34771 | -0.45437 | -0.10666 |
| K1:CAL-PCAL_EY_TCAM_PATH2_Y | -71.06132 | -70.58075 | +0.48057 |
Update Time: 2025/07/11 18:07:56
| EPICS Key | Before [mm] | After [mm] | Δ (After - Before) [mm] |
| --- | --- | --- | --- |
| K1:CAL-PCAL_EY_TCAM_MAIN_X | 8.78471 | 8.41675 | -0.36796 |
| K1:CAL-PCAL_EY_TCAM_MAIN_Y | -3.91830 | -4.09518 | -0.17689 |
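As a quick sanity check of the tables above, the Δ column can be recomputed directly; the two rows below are copied from the EX PATH1 table:

```python
# Sketch only: recompute delta = after - before for two rows of the table above.
updates = {
    "K1:CAL-PCAL_EX_TCAM_PATH1_X": (3.20691, 3.26692),
    "K1:CAL-PCAL_EX_TCAM_PATH1_Y": (62.78539, 62.67463),
}
for chan, (before, after) in updates.items():
    print(f"{chan}: delta = {after - before:+.5f} mm")
# expected: +0.06001 mm and -0.11076 mm, matching the table
```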
[Yokozawa, Yuzurihara]
We performed the lockloss investigation for the recent lockloss of 2025-07-10 10:53:19.187500 UTC. The previous lockloss investigation was posted in klog34259. This is the longest lock in O4c so far.
I checked all the lockloss phenomena reported in past klogs; apart from the OMC DCPD saturation, none of them occurred just before this lockloss.
The OMC saturation occurred just before the lockloss, but how quickly it saturated seems different from past saturations. Yokozawa-san and I checked the time series and listed the possible causes of the saturation.
At this lockloss we can see the OMC saturation, which we have reported many times in past klogs. This is likely the direct cause of the lockloss. Yokozawa-san and I looked into the original cause of the OMC saturation.
One important difference between this saturation and past ones is how quickly saturation was reached. As seen in Figure 9 and Figure 10, the DCPD signal reached saturation within 1 ms, which is remarkably fast.
We think it is difficult for the suspensions to produce such a fast saturation, so we suspect it may be associated with an electrical signal or a glitch.
In past saturations we could see several oscillations just before reaching saturation, as shown in Figure 11 and in the linked example. This might be a hint for investigating the cause further.
In any case, this is a new phenomenon for us (or we had simply missed it before).
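For reference, a minimal sketch of how the time-to-saturation can be estimated from the DCPD time series with gwpy; the channel name, GPS time, and saturation threshold are assumptions and should be replaced with the actual ones:

```python
# Sketch only: estimate when the DCPD signal first approaches its peak level.
# Assumptions: gwpy with access to K1 data; the channel name and the 95% threshold
# are examples, not the actual saturation definition.
import numpy as np
from gwpy.timeseries import TimeSeries

LOCKLOSS_GPS = 1436180017                 # ~2025-07-10 10:53:19 UTC (check with site tools)
CHANNEL = "K1:OMC-TRANS_DC_A_OUT_DQ"      # example channel name

data = TimeSeries.get(CHANNEL, LOCKLOSS_GPS - 2, LOCKLOSS_GPS + 1)
norm = np.abs(data.value) / np.abs(data.value).max()
first = data.times.value[np.argmax(norm >= 0.95)]   # first sample above 95% of peak
print(f"Signal first reaches 95% of its peak at GPS {first:.4f} "
      f"({first - LOCKLOSS_GPS:+.4f} s relative to the lockloss)")
```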
Date: 2025/7/11
I performed this work as one of the weekly tasks.
The attached figure is a screenshot taken just after the work.
[Kimura and M. Takahashi]
At around 10:20 a.m. on July 11, during a routine patrol of the Y-end, a strange noise was noticed near Y-27.
The source of the noise was the cooling water unit for the TMP of the Y-27 vacuum pump, and the cause was a lack of cooling water.
Therefore, the GV of the Y-27 vacuum pump was closed and the TMP was stopped.
At approximately 1:40 p.m., the cooling water unit was refilled with water and the Y-27 vacuum pump was put back into service.
After restarting, there were no problems with the TMP or the cooling water unit, and both were running normally.
As a precaution, we decided to leave the repair equipment at Y-27 and check the situation during the next scheduled patrol.
[Kimura and H.Sawada]
Two turbomolecular pumps were delivered.
These turbomolecular pumps were transferred to the front room of the parking lot and are temporarily stored next to the lift truck.
[Nakagaki-san, Ikeda]
Summary:
This work is related to K-Log#34519: Offload of F1 GAS.
Due to a communication failure with the network switch located in the SRM VIS mini-rack, communication with the SRM GAS Stepper Motor was lost.
To recover, we restarted the SRM network switch by unplugging and reconnecting the LAN cable on the OMC side, which supplies PoE to the switch.
Details:
We received a report from R. Takahashi-san that the SRM GAS Stepper Motor was not functioning.
Upon investigation, we found the following:
The relay of the Stepper Motor showed only a slight response.
There was no ping response from k1script1 to the LAN-serial converter.
The SRM VIS mini-rack network switch did respond to pings from k1script1.
However, the web interface of the network switch was unresponsive.
Based on this, we suspected a communication failure within the SRM network switch.
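A minimal sketch of the reachability checks described above, as they could be scripted from k1script1; the host names are placeholders for the real LAN-serial converter and switch addresses:

```python
# Sketch only: ping the devices involved and report which ones respond.
# The host names below are placeholders; use the real addresses of the
# LAN-serial converter and the SRM mini-rack switch.
import subprocess

hosts = {
    "SRM LAN-serial converter": "srm-lanserial.example",
    "SRM mini-rack switch":     "srm-switch.example",
}

for label, host in hosts.items():
    result = subprocess.run(["ping", "-c", "3", "-W", "2", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    status = "reachable" if result.returncode == 0 else "NO ping response"
    print(f"{label:28s} {status}")
```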
Since the SRM network switch uses PoE, its power can be controlled from the OMC side. However, we were unable to locate the port list for remote control.
Therefore, after consulting in the control room, we decided to enter the tunnel for manual intervention.
14:13 - Entered the center and reconnected the LAN cable from the OMC network switch to the SRM.
14:15 - Confirmed at the BS area workstation that the SRM GAS Stepper Motor could be turned ON via BIO and scripts.
14:18 - Handed over control to Takahashi-san.
14:19 - Exited the central area.
OMC Network Switch Port Assignments:
Pico Network
01: PD mini rack switch
02: OMC mini rack switch
03: SR3 mini rack switch
04: SR2 mini rack switch
05: SRM mini rack switch
06: BS mini rack switch
07: PR2 mini rack switch
08: Green X Table switch
09: Green Y Table switch
10: AS Table switch
11: Precision Air Processor SRX1
12: AS_WFS HWP
DGS Network
18: OMC Workstation
[Kimura, M.Takahashi, H.Sawada and Yamaguchi]
We transported the experimental equipment from the parking lot in the mine into the prefab house at Kohguci.
The equipment will be temporarily stored in the prefab house until July 15.
I accepted the following SDF differences (see related klog34517, klog34518, and klog34519). SDF #25-27 were reverted because of a misoperation.
[Takahashi, Ikeda]
I offloaded the F1 GAS with the FR. The stepper motor initially did not work due to a network switch problem, which was recovered at the site by Ikeda-san.
I offloaded the SF GAS with the FR, which reached the maximum limit (1360000 steps).
I offloaded the F0 GAS with the FR.
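As a reference for the step-limit issue noted above, a minimal sketch of a margin check before offloading; the channel names are hypothetical and the 1360000-step limit is taken from this entry:

```python
# Sketch only: check how much fishing-rod (FR) range is left before offloading.
# The channel names are hypothetical placeholders; the step limit comes from
# the SF GAS case described above.
from epics import caget

STEP_LIMIT = 1_360_000

fr_steps = caget("K1:VIS-SRM_SF_FR_STEP_READ")    # hypothetical channel name
gas_out = caget("K1:VIS-SRM_SF_GAS_COIL_OUT16")   # hypothetical channel name

if fr_steps is not None:
    print(f"SF FR position: {fr_steps:.0f} steps "
          f"({STEP_LIMIT - fr_steps:.0f} steps of range remaining)")
if gas_out is not None:
    print(f"SF GAS coil output to offload: {gas_out:.1f} counts")
```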