Recently (since Sunday), the peak around 110 Hz has become rather large. Do we need a beam position adjustment?
I checked the OPLEV sum for TM and MN for the last ~1 month.
The TM OPLEV seems to show a ~1-day oscillation with a smaller half-day oscillation. Is it related to tidal motion? The total value also seems to decrease gradually.
The MN OPLEV seems to show a ~1-week oscillation.
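For reference, a minimal sketch of how the day- and week-scale oscillations could be checked quantitatively, assuming gwpy/NDS access to a trend of the OPLEV sum; the channel name and dates below are placeholders, not the channels actually inspected here:

```python
# Rough periodicity check of an OPLEV sum trend over ~1 month.
# Assumptions: gwpy and scipy are available; the channel name and dates are
# placeholders, not the channels actually inspected here.
import numpy as np
from scipy.signal import detrend
from gwpy.timeseries import TimeSeries

CHANNEL = "K1:VIS-ETMX_TM_OPLEV_TILT_SUM_OUT16"  # hypothetical channel name
START, END = "2025-06-10", "2025-07-10"          # ~1 month window

trend = TimeSeries.get(CHANNEL, START, END)      # ideally a minute trend
data = detrend(trend.value)                      # remove the gradual decrease

# Amplitude spectrum of the trend: peaks near 1-day and 0.5-day periods
# would be consistent with a tidal origin.
amp = np.abs(np.fft.rfft(data))
freq = np.fft.rfftfreq(data.size, d=trend.dt.value)
strongest = sorted(zip(freq[1:], amp[1:]), key=lambda fa: -fa[1])[:5]
for f, a in strongest:
    print(f"period = {1.0 / f / 3600.0:7.1f} h   amplitude = {a:.3g}")
```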
Operators' names: Kimura, Takahashi
Shift time: 9-17 (JST)
Check Items:
VAC: No issues were found.
CRY cooler:
10:00 We found a 10K difference in 4K_REF3_50K_HEAD/BAR (Fig.1).
Kimura-san checked Q-Mass (Fig.2).
16:00 Temperature drift became small (Fig.3).
Kimura-san checked Q-Mass (Fig.4).
Compressor: No issues were found.
IFO state (JST):
09:00 The shift started. The status was "OBSERVING".
10:10 Lock lost.
10:34 "OBSERVING"
13:38 Lock lost.
14:42 "OBSERVING"
14:57 Lock lost.
17:00 This shift finished. The status was "LOCKING".
This is the first trouble of this kind for this cryocooler. In past similar troubles at IX, the rise to 120 K took one day, cooling started just after the peak, and it took 5 days to return to 90 K. Of course, there is no guarantee of recovery.
The EY_4K_REF3_50K temperature is now over 110 K, and the temperature of EY_4K_REF4_50K has also exceeded the critical temperature, so a slight release of H2O is expected.
The vacuum level on the EYT side slightly increased from 3.6 to 4.1 x 10^-6 Pa. There is no change in the vacuum level on the EYGV side and no significant temperature change in the duct shields.
One concern is that the temperature of EY_80_SHIELD_TOP is now increasing; we should avoid letting it exceed the critical temperature.
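As a possible stopgap until the temperature settles, here is a minimal watchdog sketch that polls the shield-top temperature and warns before the critical temperature is reached; the EPICS channel name and the critical value are placeholders, assuming pyepics access:

```python
# Minimal temperature watchdog sketch (assumes pyepics; channel name and
# critical value are placeholders, not the real K1 settings).
import time
from epics import caget

TEMP_CH = "K1:CRY-TMP_EY_80_SHIELD_TOP"  # hypothetical EPICS channel name
CRITICAL_K = 110.0                        # placeholder critical temperature [K]
MARGIN_K = 5.0                            # warn this far below the limit

while True:
    temp = caget(TEMP_CH)
    if temp is None:
        print("channel unavailable; check the IOC / gateway")
    elif temp > CRITICAL_K - MARGIN_K:
        print(f"WARNING: {TEMP_CH} = {temp:.2f} K is approaching {CRITICAL_K} K")
    time.sleep(60)  # poll once per minute
```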
I heard from Kimura-san that a battery alarm has been triggered on the UPS around the EYA chamber.
It might be the UPS related to the EYA Tcam server.
It is unclear at this point whether it is related to the current trouble.
The replacement battery arrived last week and is ready to be installed.
> So this Tuesday or Thursday seems to be a good day for recovering the TCam (though I'm not sure about the detailed work plan for this Tuesday and Thursday yet).
I agree.
Since the rebooter is connected to the UPS, there is also a suspicion that something might be wrong with the UPS.
Operators' names: Hirose, Nakagaki
Shift time: 9-17 (JST)
Check Items:
VAC: No issues were found.
CRY cooler:
10:00 We contacted Ushiba-san and Kimura-san because we found a 10K difference in temperature checks. (Fig1)
Kimura-san checked Q-Mass and said no urgent action was needed. (Fig2)
16:00 The issue could no longer be confirmed. (Fig3, Fig4)
Compressor: No issues were found.
IFO state (JST):
09:00 The shift was started. The status was "OBSERVING".
17:00 This shift was finished. The status was "OBSERVING".
Problem with EY-TCam images not displaying in control room (Klog #34542)
The EY_4K_REF3_50K temperature is now over 110 K, and the temperature of EY_4K_REF4_50K has also exceeded the critical temperature, so a slight release of H2O is expected.
The vacuum level on the EYT side slightly increased from 3.6 to 4.1 x 10^-6 Pa. There is no change in the vacuum level on the EYGV side and no significant temperature change in the duct shields.
One concern is that the temperature of EY_80_SHIELD_TOP is now increasing; we should avoid letting it exceed the critical temperature.
This is the first trouble of this kind for this cryocooler. In past similar troubles at IX, the rise to 120 K took one day, cooling started just after the peak, and it took 5 days to return to 90 K. Of course, there is no guarantee of recovery.
> Is it possible to mask the 1.5 mm edge region from the measurement results of the HCB surfaces of the sapphire ears?
Yes, that's possible. I'm currently waiting for my software account to be prepared.
I’ve prepared a detailed report.
(I’ll update it once I receive the account and apply an appropriate mask to the ear data.)
Since the rebooter is connected to the UPS, there is also a suspicion that something might be wrong with the UPS.
> So this Tuesday or Thursday seems to be a good day for recovering the TCam (though I'm not sure about the detailed work plan for this Tuesday and Thursday yet).
I agree.
I heard from Kimura-san that a battery alarm has been triggered on the UPS around the EYA chamber.
It might be the UPS related to the EYA Tcam server.
It is unclear at this point whether it is related to the current trouble.
The replacement battery arrived last week and is ready to be installed.
Operators' names: Hirose, Nakagaki
Shift time: 9-17 (JST)
Check Items:
VAC: No issues were found.
CRY cooler: No issues were found.
Compressor: No issues were found.
IFO state (JST):
09:00 Due to the work done the previous day,
OBS INTENT was "OFF", INTERFEROMETER STATUS was "CALIB NOT READY", and LENGTH SENSING CONTROL was "OBSERVATION".
14:37 Ushiba-san turned OBS INTENT "ON", and the status is now "OBSERVING". (Klog #34540)
17:00 This shift was finished. The status was "OBSERVING".
We turned on the OBS INTENT around 14:37 JST.
The IFO guardian request state was somehow changed to READY around noon yesterday, so we requested the OBSERVING state from the IFO guardian.
Since the IFO guardian state should change according to the other guardian states and channel values, please do not change the IFO guardian state manually.
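To illustrate why a manual request gets overridden, here is a rough sketch of the kind of dependency logic the IFO guardian encodes (this is not the actual guardian code); the channel and state names are placeholders, assuming pyepics read access:

```python
# Illustration only: the IFO guardian derives its target state from other
# guardian states and channel values, so a manual request is overwritten.
# Channel and state names here are placeholders, not the real K1 ones.
from epics import caget

LSC_STATE_CH = "K1:GRD-LSC_LOCK_STATE_S"   # hypothetical
CAL_STATE_CH = "K1:GRD-CAL_PROC_STATE_S"   # hypothetical
OBS_INTENT_CH = "K1:GRD-IFO_OBS_INTENT"    # hypothetical

def derived_ifo_request():
    """Return the state the IFO guardian would pick on its own."""
    if caget(LSC_STATE_CH, as_string=True) != "OBSERVATION":
        return "LOCKING"
    if caget(CAL_STATE_CH, as_string=True) != "READY":
        return "CALIB_NOT_READY"
    if caget(OBS_INTENT_CH) != 1:
        return "READY"
    return "OBSERVING"

print(derived_ifo_request())
```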
After clearing SDF differences, I checked various changes since 9pm yesterday.
There are no changes in foton (Fig.1), the guardian code (Fig.2), or the models (Fig.3).
Changes in SDF tables come from klog#34537 (see also Fig.4).
Finally, I raised CFC_LACTCH and IFO guardian moved from CALIB_NOT_READY to READY.
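For reference, a rough sketch of the "what changed since 9 pm yesterday" check; the directories below are assumptions, not the verified K1 paths:

```python
# Sketch: list files modified since 21:00 yesterday under assumed locations
# of the foton filter files, guardian code, and front-end models.
import os
from datetime import datetime, timedelta

DIRS = {
    "foton filters": "/opt/rtcds/kamioka/k1/chans",          # assumed path
    "guardian code": "/opt/rtcds/userapps/release/guardian",  # assumed path
    "models":        "/opt/rtcds/userapps/release/cds",       # assumed path
}
cutoff = (datetime.now() - timedelta(days=1)).replace(
    hour=21, minute=0, second=0, microsecond=0).timestamp()

for label, root in DIRS.items():
    changed = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.exists(path) and os.path.getmtime(path) > cutoff:
                changed.append(path)
    print(f"{label}: {len(changed)} file(s) modified since 21:00 yesterday")
    for path in changed:
        print("   ", path)
```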
I accepted the following SDF differences again. Although the screenshot was not recorded, the accepted or reverted channels are indicated by red boxes in the attached figure.
Channels in JGW-L2314962
It's related to klog#34536
They were updated based on the latest values of the DARM optical gain and the ETMX actuator efficiency reported in klog#34533.
Changes were accepted on observation.snap (Fig.1), down.snap (Fig.2), and safe.snap (Fig.3).
Finally, numerical rounding errors were reverted after re-loading observation.snap as shown in Fig.4.
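A tiny illustration of why re-loading a snap file can produce such spurious diffs: the snap stores a rounded value, so an exact comparison against the live value fails even though nothing physical changed (the numbers below are made up):

```python
# Exact float comparison flags a rounding artifact as a diff; a tolerance
# comparison does not.  The values below are made up for illustration.
import math

live_value = 0.123456789012345      # value currently in the front end
snap_value = float("0.1234567890")  # value as re-read from observation.snap

print(live_value == snap_value)                            # False -> flagged as a diff
print(math.isclose(live_value, snap_value, rel_tol=1e-9))  # True  -> only rounding
```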
I updated line tracking parameters (JGW-L2314962).
All detected changes come from the planned commissioning activities.
- Changes in foton (Fig.1) are related to klog#34533 (k1calcs).
- No changes in guardian (Fig.2).
- Changes in the SDF tables shown in Fig.3-4 are related to klog#34515, klog#34532, and klog#34534 (k1calcs), and to klog#34513 and klog#34531 (k1calex, k1caley).
- No changes in the model (Fig.5).
Changes in the SDF table from klog#34520 (k1sdfmanage) were not detected because they were accepted/reverted on down.snap instead of observation.snap, so SDF diffs still remain on k1sdfmanage. I checked whether the remaining differences are the same as those processed on down.snap in klog#34520, and found some inconsistency between the processed differences and the current values. So I cannot decide how we should process these remaining changes, and the IFO guardian is still in LOCKED.
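For reference, a rough sketch of the cross-check between the remaining k1sdfmanage diffs and the values processed on down.snap; it assumes the snap files can be reduced to simple "CHANNEL VALUE" lines, which is a simplification of the real burt format, and the file names are placeholders:

```python
# Sketch: compare "current" channel values against down.snap with a small
# numerical tolerance, to spot genuine inconsistencies.
import math

def load_snap(path):
    """Read a simplified snap file of 'CHANNEL VALUE' lines into a dict."""
    table = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0].startswith("K1:"):
                try:
                    table[parts[0]] = float(parts[-1])
                except ValueError:
                    table[parts[0]] = parts[-1]
    return table

down = load_snap("down.snap")               # values processed in klog#34520
current = load_snap("current_values.snap")  # hypothetical export of live values

for ch, val in current.items():
    ref = down.get(ch)
    if isinstance(ref, float) and isinstance(val, float):
        same = math.isclose(ref, val, rel_tol=1e-6)
    else:
        same = (ref == val)
    if not same:
        print(f"inconsistent: {ch}  down.snap={ref}  current={val}")
```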
We accepted the SDFs reported on klog#34530.
CALEX, CALEY
K1:CAL-PCAL_{EX,EY}_TCAM_{MAIN,PATH1,PATH2}_{X,Y}
A CAL Tcam session was performed to obtain the beam position information necessary for Pcal. The parameters have already been updated, and the SDF changes have already been accepted. A quick cross-check of the Δ column (for the EX PATH entries) is sketched after the tables below.
Operator: DanChen, ShingoHido
Update Time: 2025/07/11 18:06:51
EPICS Key | Before [mm] | After [mm] | Δ (After - Before) [mm] |
---|---|---|---|
K1:CAL-PCAL_EX_TCAM_PATH1_X | 3.20691 mm | 3.26692 mm | +0.06001 mm |
K1:CAL-PCAL_EX_TCAM_PATH1_Y | 62.78539 mm | 62.67463 mm | -0.11076 mm |
K1:CAL-PCAL_EX_TCAM_PATH2_X | -0.21958 mm | -0.12804 mm | +0.09154 mm |
K1:CAL-PCAL_EX_TCAM_PATH2_Y | -63.36743 mm | -63.39445 mm | -0.02702 mm |
Update Time: 2025/07/11 18:07:16
EPICS Key | Before [mm] | After [mm] | Δ (After - Before) [mm] |
---|---|---|---|
K1:CAL-PCAL_EX_TCAM_MAIN_X | 3.62390 mm | 3.38665 mm | -0.23725 mm |
K1:CAL-PCAL_EX_TCAM_MAIN_Y | 11.89945 mm | 12.36073 mm | +0.46128 mm |
Update Time: 2025/07/11 18:07:39
EPICS Key | Before [mm] | After [mm] | Δ (After - Before) [mm] |
---|---|---|---|
K1:CAL-PCAL_EY_TCAM_PATH1_X | 1.27965 mm | 1.51586 mm | +0.23621 mm |
K1:CAL-PCAL_EY_TCAM_PATH1_Y | 63.48267 mm | 63.81734 mm | +0.33467 mm |
K1:CAL-PCAL_EY_TCAM_PATH2_X | -0.34771 mm | -0.45437 mm | -0.10666 mm |
K1:CAL-PCAL_EY_TCAM_PATH2_Y | -71.06132 mm | -70.58075 mm | +0.48057 mm |
Update Time: 2025/07/11 18:07:56
EPICS Key | Before [mm] | After [mm] | Δ (After - Before) [mm] |
---|---|---|---|
K1:CAL-PCAL_EY_TCAM_MAIN_X | 8.78471 mm | 8.41675 mm | -0.36796 mm |
K1:CAL-PCAL_EY_TCAM_MAIN_Y | -3.91830 mm | -4.09518 mm | -0.17689 mm |
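As a sanity check of the Δ column, the EX PATH entries from the first table can be recomputed from the Before/After values; the script below only restates the numbers already listed above:

```python
# Re-derive the Δ column for the EX PATH update from the Before/After
# values in the first table (numbers copied from the table).
before = {
    "K1:CAL-PCAL_EX_TCAM_PATH1_X":   3.20691,
    "K1:CAL-PCAL_EX_TCAM_PATH1_Y":  62.78539,
    "K1:CAL-PCAL_EX_TCAM_PATH2_X":  -0.21958,
    "K1:CAL-PCAL_EX_TCAM_PATH2_Y": -63.36743,
}
after = {
    "K1:CAL-PCAL_EX_TCAM_PATH1_X":   3.26692,
    "K1:CAL-PCAL_EX_TCAM_PATH1_Y":  62.67463,
    "K1:CAL-PCAL_EX_TCAM_PATH2_X":  -0.12804,
    "K1:CAL-PCAL_EX_TCAM_PATH2_Y": -63.39445,
}
for ch in before:
    print(f"{ch}: {after[ch] - before[ch]:+.5f} mm")  # e.g. PATH1_X -> +0.06001 mm
```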
[Yokozawa, Yuzurihara]
We performed the lockloss investigation for the recent lockloss at 2025-07-10 10:53:19.187500 UTC. The previous lockloss investigation was posted in klog34259. This was the longest lock in O4c so far.
Although I checked all the lockloss phenomena reported in past klogs, none of them except the OMC DCPD saturation occurred just before this lockloss.
The OMC saturation occurred just before the lockloss, but how quickly the signal saturated seems different from past saturations. Yokozawa-san and I checked the time series and listed the possible causes of the saturation.
At this lockloss, we can see the OMC saturation, which we have reported many times in past klogs. This is likely the direct cause of the lockloss. Yokozawa-san and I looked into the original cause of the OMC saturation.
One important difference between this saturation and past ones is how quickly the saturation was reached. As seen in Figure 9 and Figure 10, the DCPD signal reached saturation within 1 ms, which is very fast.
We think it is difficult for the suspension to produce such a quick saturation, so we suspect it might be associated with an electrical signal or a glitch.
In past saturations, we could see several oscillations just before reaching the saturation, as shown in Figure 11 or this. This might be a hint for investigating the cause further.
Anyway, this was a new phenomenon for us (or I missed it before...).
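For reference, a rough sketch of how the sub-millisecond rise time could be estimated from the data, assuming gwpy access; the DCPD channel name and the thresholds are placeholders:

```python
# Sketch: estimate the rise time to saturation around the lockloss at
# 2025-07-10 10:53:19.1875 UTC.  The channel name and thresholds are
# placeholders, not the channels/levels used in the actual analysis.
import numpy as np
from gwpy.timeseries import TimeSeries

CHANNEL = "K1:OMC-TRANS_DC_A_OUT_DQ"   # hypothetical DCPD channel name
START = "2025-07-10 10:53:18.7"
END = "2025-07-10 10:53:19.4"

data = TimeSeries.get(CHANNEL, START, END)
norm = np.abs(data.value) / np.max(np.abs(data.value))

sat_idx = int(np.argmax(norm > 0.9))       # first "saturated" sample
quiet = np.where(norm[:sat_idx] < 0.1)[0]  # last quiet sample before it
if sat_idx and quiet.size:
    rise_time = (sat_idx - quiet[-1]) * data.dt.value
    print(f"rise time to saturation: {rise_time * 1e3:.2f} ms")
else:
    print("no clean rise found; inspect the time series by eye")
```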