MIF (General)
shinji.miyoki - 23:23 Monday 14 July 2025 (34557)
Peak around 110 Hz has become large recently

Recently (since Sunday), the peak around 110 Hz has become quite large. Do we need a beam position adjustment?

Images attached to this report
CRY (Cryo-payload EX)
shinji.miyoki - 22:09 Monday 14 July 2025 (34556)
OPLEV value changes check at EX

I checked the OPLEV sum for TM and MN for the last ~1 month.

The TM OPLEV sum seems to have a ~1-day oscillation, with a smaller half-day oscillation on top of it. Is it related to tidal motion? The total value seems to decrease gradually.

The MN OPLEV sum seems to have a ~1-week oscillation.
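
As a possible way to quantify these oscillation periods, one could take the minute trend of the OPLEV SUM channel over the month and compare the power near once and twice per day. The sketch below is only an illustration of that idea, not the check actually performed here; the channel name and time range are placeholders/assumptions.

# Minimal sketch (assumed channel name and time range): fetch the minute trend of an
# OPLEV SUM channel for ~1 month and compare the power at 1/day and 2/day periods.
import numpy as np
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

chan = 'K1:VIS-ETMX_TM_OPLEV_TILT_SUM_OUTPUT.mean,m-trend'   # placeholder channel name
data = TimeSeries.get(chan, to_gps('2025-06-14'), to_gps('2025-07-14'))

x = data.value - np.mean(data.value)       # remove the DC offset
freqs = np.fft.rfftfreq(x.size, d=60.0)    # minute trend -> 60 s sampling
power = np.abs(np.fft.rfft(x))**2
for f_target in (1.0 / 86400.0, 2.0 / 86400.0):   # once and twice per day
    i = np.argmin(np.abs(freqs - f_target))
    print(f'period {1.0 / freqs[i] / 3600.0:.1f} h: relative power {power[i] / power.max():.3f}')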

Images attached to this report
OBS (Summary)
ryutaro.takahashi - 17:00 Monday 14 July 2025 (34555)
Operation shift summary

Operators' names: Kimura, Takahashi
Shift time: 9-17 (JST)

Check Items:

VAC: No issues were found.
CRY cooler:
  10:00 We found a 10 K difference in 4K_REF3_50K_HEAD/BAR (Fig.1).
        Kimura-san checked the Q-Mass (Fig.2).
  16:00 The temperature drift became small (Fig.3).
        Kimura-san checked the Q-Mass (Fig.4).
Compressor: No issues were found.

IFO state (JST):

09:00 The shift started. The status was "OBSERVING".
10:10 Lock lost.
10:34 "OBSERVING"
13:38 Lock lost.
14:42 "OBSERVING"
14:57 Lock lost.
17:00 This shift finished. The status was "LOCKING".

Images attached to this report
OBS (Summary)
shinji.miyoki - 9:51 Monday 14 July 2025 (34553)
Comment to Operation shift summary (34547)

This is the first time this cryocooler has had this trouble. In past similar troubles at IX, the rise to 120 K took one day, cooling started just after the peak, and it took 5 days to get back to 90 K. Of course, there is no guarantee of recovery.

OBS (Summary)
shinji.miyoki - 8:57 Monday 14 July 2025 (34552)
Comment to Operation shift summary (34547)

The EY_4K_REF3_50K temperature is now over 110 K, and the temperature of EY_4K_REF4_50K has also exceeded the critical temperature, so a slight release of H2O is expected.

The vacuum level on the EYT side slightly increased from 3.6 to 4.1 x 10^-6 Pa. There was no change in the vacuum level on the EYGV side and no serious temperature changes in the duct shields.

One concern is that the temperature of EY_80_SHIELD_TOP is now increasing; we should avoid letting it exceed the critical temperature.

Images attached to this comment
DGS (General)
dan.chen - 6:21 Monday 14 July 2025 (34551)
Comment to EY TCam server seems to be unreachable (34542)

I heard from Kimura-san that a battery alarm has been triggered on the UPS around the EYA chamber.
It might be the UPS related to the EYA TCam server.
It is unclear at this point whether it is related to the current trouble.
The replacement battery arrived last week and is ready to be installed.

DGS (General)
shinji.miyoki - 22:05 Sunday 13 July 2025 (34550)
Comment to EY TCam server seems to be unreachable (34542)

>So this Tuesday or Thursday seems to be a good day for recovering the TCam (though I'm not sure about the detailed work plan for this Tuesday and Thursday yet).

I agree.

DGS (General)
satoru.ikeda - 17:56 Sunday 13 July 2025 (34549)
Comment to EY TCam server seems to be unreachable (34542)

Since the rebooter is connected to the UPS, there is also a suspicion that something might be wrong with the UPS.

OBS (Summary)
koji.nakagaki - 17:16 Sunday 13 July 2025 (34547)
Operation shift summary

Operators' names: Hirose, Nakagaki
Shift time: 9-17 (JST)

Check Items:

VAC: No issues were found.
CRY cooler:
  10:00 We contacted Ushiba-san and Kimura-san because we found a 10 K difference in the temperature checks. (Fig1)
        Kimura-san checked the Q-Mass and said no urgent action was needed. (Fig2)
  16:00 The issue could not be confirmed. (Fig3, Fig4)

Compressor: No issues were found.

IFO state (JST):

09:00 The shift was started. The status was "OBSERVING".
17:00 This shift was finished. The status was "OBSERVING".

Problem with EY-TCam images not displaying in control room (Klog #34542)

Images attached to this report

DGS (General)
takahiro.yamamoto - 14:19 Sunday 13 July 2025 (34546)
Comment to EY TCam server seems to be unreachable (34542)
Hirose-san tried to recover the EY TCam, but it didn't come back. So I checked the situation and found that not only the EY TCam server but also its rebooter was unreachable. The server for the EY Pcal laser control and its rebooter, which are connected to the same network switch at the EYA booth, are reachable, so the network around EYA has no problem. I'm not sure whether there is a way to recover a trouble on the rebooter itself remotely; if not, we need to go to EYA to recover them.

It is not so urgent unless we get into a situation that requires initial alignment, so we can probably wait until a weekday. But if we leave it until Friday, the timetable of maintenance tasks will be delayed by roughly 1 hour, because the TCam photo sessions must wait for the recovery work, and going to EYA will probably be done after the weekly calibration measurement. I want to avoid an extension of the whole maintenance time, because completing all works by 22:00 is already a tight schedule. I guess the recovery work can be done in parallel with other commissioning activities. So this Tuesday or Thursday seems to be a good day for recovering the TCam (though I'm not sure about the detailed work plan for this Tuesday and Thursday yet).
CRY (Cryo-payload R&D)
dan.chen - 6:19 Sunday 13 July 2025 (34543)
Comment to Surface figure measurements for sapphire components (34489)

> Is it possible to mask the edge of 1.5mm from the measurement results of HCB surface of sapphire ears?
Yes, that's possible. I'm currently waiting for my software account to be prepared.

I’ve prepared a detailed report.
(I’ll update it once I receive the account and apply an appropriate mask to the ear data.)
 

Non-image files attached to this comment
DGS (General)
takahiro.yamamoto - 23:51 Saturday 12 July 2025 (34542)
EY TCam server seems to be unreachable
Dear tomorrow's shifters,

According to the beam spot archiver, the auto-recovery for the TCam viewer isn't working for EY.
The EY TCam was alive at least until 11:30 JST,
but it has not been visible since 11:40 JST.

If it were a problem with Vinagre on k1mon4, it would be recovered automatically,
so it seems to be a problem with PlanetaryImager on cam-eya.
It can be recovered by the procedure in Sec. 1.1.5 of the DGS manual (JGW-G2516704).
Please try it.

-----
We can see that the image layout and resolution changed between 7/11 17:40 JST and 7/11 17:50 JST.

Usually, the camera viewer shows an image at 1/4 of the camera's native resolution to reduce network traffic and CPU load. This is important to avoid a hang-up of the camera server and/or applications. On the other hand, an image at full (1/1) resolution is taken during the TCam session. We know that the image resolution is sometimes not reverted to 1/4 after the session. (I have never reproduced this phenomenon by a manual request to the guardian, so I suspect it comes from the compatibility of the TCam guardian code and the camera session script.) Anyway, this hang-up seems to have been caused by the large network traffic and heavy CPU load due to the un-reverted image resolution after the camera session.
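
As a possible mitigation, a small watchdog could revert the viewer to 1/4 resolution whenever a photo session has finished but the resolution was left at 1/1. The sketch below is only a rough illustration, not existing KAGRA code; the helper callables are hypothetical stand-ins for whatever interface the camera server actually exposes.

# Rough sketch of a watchdog that reverts the viewer resolution after a TCam session.
# session_is_running(), get_viewer_scale(), and request_scale() are hypothetical helpers.
import time

FULL_SCALE = 1.0     # 1/1 resolution used during the TCam photo session
VIEW_SCALE = 0.25    # 1/4 resolution used for normal viewing

def resolution_watchdog(session_is_running, get_viewer_scale, request_scale, period=60):
    while True:
        if not session_is_running() and get_viewer_scale() == FULL_SCALE:
            # The session has finished but the resolution was not reverted: fix it and log.
            request_scale(VIEW_SCALE)
            print('Viewer resolution reverted to 1/4 after the TCam session.')
        time.sleep(period)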

OBS (Summary)
koji.nakagaki - 17:18 Saturday 12 July 2025 (34541)
Operation shift summary

Operators' names: Hirose, Nakagaki
Shift time: 9-17 (JST)

Check Items:

VAC: No issues were found.
CRY cooler: No issues were found.
Compressor: No issues were found.

IFO state (JST):

09:00 Due to the work done the previous day, 
  OBS INTENT was "OFF", INTERFEROMETER STATUS was "CALIB NOT READY", and LENGTH SENSING CONTROL was "OBSERVATION".

   (Klog #34533) (Klog #34539)

14:37 Ushiba-san turned OBS INTENT "ON", and the status became "OBSERVING". (Klog #34540)
17:00 This shift was finished. The status was "OBSERVING".

OBS (General)
takahiro.yamamoto - 15:46 Saturday 12 July 2025 (34539)
Lesson for future maintenance and commissioning works
Because of multiple human errors, we couldn't go back to OBSERVING for a while.
To prevent the same mistakes from being repeated, I summarized what went wrong and what must be improved.
Since partial commissioning will be resumed, this should be a good lesson for the many people who will carry out commissioning activities.

Related klogs
- Trigger work: klog#34520
- Calibration works: klog#34530, klog#34533
- Initial notice: klog#34536
- Clearing work: klog#34537
- Confirmation: klog#34538
- Back to Obs.: klog#34540

Points to be improved
1) The SDF clearing work was done while the LSC_LOCK guardian was NOT in OBSERVATION.
=> We must ensure that all set-point values in observation.snap are accepted as the values used in the OBSERVATION state.

2) The loading cycle of down.snap/observation.snap was skipped in the clearing work.
=> The loading cycle is necessary to mitigate accidental SDF issues.
Though the main purpose of this procedure is to avoid SDF issues due to numerical rounding errors, it can also prevent mistakes such as the one above (a small illustration of how a rounding error shows up as an SDF difference is given at the end of this entry).

3) The calibration measurement was started with insufficient checks.
=> A calibration measurement must be started only after confirming that CALIB_NOT_READY or READY on the IFO guardian is reachable.
Fortunately, the problem this time did not affect the calibration measurement, but in the worst case a re-measurement could have been required.
If sufficient checks had been done, we could probably have found the problem ~2 hours earlier.

4) The request state of the IFO guardian was changed by mistake (though it is not directly related to this issue).
=> Requests to the interferometer must be made via the LSC_LOCK guardian.
No work requires manual operation of the IFO guardian except for a bug fix of the IFO guardian itself.
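
As a small illustration of item 2 (not KAGRA code), a set point saved with reduced precision does not compare exactly equal to the live double-precision value, so it shows up as an SDF difference unless a tolerance is applied or the rounded value is written back by the loading cycle:

# Illustration only: how a rounding error can appear as a spurious SDF difference.
import numpy as np

live_value = 0.1234567890123456              # live double-precision value on the front end
stored_set = float(np.float32(live_value))   # set point as stored with single precision

print(stored_set == live_value)                                # False -> flagged as an SDF difference
print(abs(stored_set - live_value) < 1e-6 * abs(live_value))   # True -> within tolerance, not flagged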
OBS (General)
takafumi.ushiba - 14:46 Saturday 12 July 2025 (34540)
Comment to Set observing bit (34099)

We turned on the OBS INTENT around 14:37 JST.
The IFO guardian request state was somehow changed to READY around noon yesterday, so we requested the OBSERVING state to the IFO guardian.

Since the IFO guardian state should change according to the other guardian states and channel values, please do not change the IFO guardian state manually.

CAL (General)
takahiro.yamamoto - 14:19 Saturday 12 July 2025 (34538)
Comment to Weekly calibration on 7/11 (34533)

After clearing SDF differences, I checked various changes since 9pm yesterday.

There are no changes in foton (Fig.1), the guardian code (Fig.2), or the models (Fig.3).
Changes in SDF tables come from klog#34537 (see also Fig.4).

Finally, I raised CFC_LACTCH and IFO guardian moved from CALIB_NOT_READY to READY.

Images attached to this comment
OBS (SDF)
ryutaro.takahashi - 13:50 Saturday 12 July 2025 (34537)
Comment to Changes of observation.snap during O4c (34169)

I accepted the following SDF differences again. Although the screenshot was not recorded at the time, the accepted or reverted channels are indicated by red boxes in the attached figure.

Images attached to this comment
OBS (SDF)
takahiro.yamamoto - 20:27 Friday 11 July 2025 (34532)
Comment to Changes of observation.snap during O4c (34169)

K1CALCS

Channels in JGW-L2314962

This is related to klog#34536.
They were updated based on the latest values of the DARM optical gain and the ETMX actuator efficiencies in klog#34533.

Changes were accepted on observation.snap (Fig.1), down.snap (Fig.2), and safe.snap (Fig.3).
Finally, numerical rounding errors were reverted after re-loading observation.snap as shown in Fig.4.

Images attached to this comment
CAL (General)
takahiro.yamamoto - 20:27 Friday 11 July 2025 (34536)
Comment to Weekly calibration on 7/11 (34533)

I updated line tracking parameters (JGW-L2314962).

All detected changes come from the planned commissioning activities.
- Changes in foton (Fig.1) are related to klog#34533 (k1calcs).
- No changes in guardian (Fig.2).
- Changes in the SDF tables shown in Fig.3-4 are related to klog#34515, klog#34532, klog#34534 (k1calcs) and klog#34513, klog#34531 (k1calex, k1caley).
- No changes in the model (Fig.5).

Changes in the SDF table by klog#34520 (k1sdfmanage) were not detected because they were accepted/reverted on down.snap instead of observation.snap, so SDF diffs still remain on k1sdfmanage. I checked whether the remaining differences are the same as the ones processed on down.snap in klog#34520, and found some inconsistency between the processed differences and the current values. So I cannot decide how we should process these remaining changes, and the IFO guardian is still in LOCKED.

Images attached to this comment
OBS (SDF)
dan.chen - 20:10 Friday 11 July 2025 (34534)
Comment to Changes of observation.snap during O4c (34169)
We accepted the SDFs related to the calibration measurement in observation.snap and safe.snap (k1calcs).
K1:CAL-MEAS_{CURRENT, LATEST}
Images attached to this comment
CAL (General)
dan.chen - 19:57 Friday 11 July 2025 (34533)
Weekly calibration on 7/11
CAL group
We did the calibration measurements and updated the parameters.

Estimated parameters in the Pre-maintenance measurements are as follows.
 H_etmxtm = 3.842050490e-14 @10Hz ( 0.06% from previous measurements)
 H_etmxim = 1.538272700e-14 @10Hz  ( 3.65% from previous measurements)
 Optical_gain = 2.080308644e+12   ( -2.17% from previous measurements)
 Cavity_pole = 18.275875408 Hz  ( 1.30% from previous measurements)

Previous values are listed in klog#34466.

Estimated parameters in the Post-maintenance measurements are as follows.
 H_etmxtm = 3.857020608e-14 @10Hz ( 0.39% from pre-maintenance measurements)
 H_etmxim = 1.603791210e-14 @10Hz  ( 4.26% from pre-maintenance measurements)
 Optical_gain = 2.075302086e+12   ( -0.24% from pre-maintenance measurements)
 Cavity_pole = 18.126316036 Hz  ( -0.82% from pre-maintenance measurements)

Fig.3 and Fig.4 show the fitting results.
Fig.1 and Fig.2 show the ratios of the sensing functions.
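
As a quick cross-check, the post- vs pre-maintenance differences quoted above can be reproduced directly from the listed values:

# Cross-check of the quoted post/pre-maintenance differences (values copied from above).
pre  = {'H_etmxtm': 3.842050490e-14, 'H_etmxim': 1.538272700e-14,
        'Optical_gain': 2.080308644e+12, 'Cavity_pole': 18.275875408}
post = {'H_etmxtm': 3.857020608e-14, 'H_etmxim': 1.603791210e-14,
        'Optical_gain': 2.075302086e+12, 'Cavity_pole': 18.126316036}

for key in pre:
    change = 100.0 * (post[key] - pre[key]) / pre[key]
    print(f'{key}: {change:+.2f}% from pre-maintenance')
# -> +0.39%, +4.26%, -0.24%, -0.82%, matching the values quoted above.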
Images attached to this report
OBS (SDF)
dan.chen - 18:17 Friday 11 July 2025 (34531)
Comment to Changes of observation.snap during O4c (34169)

We accepted the SDFs reported in klog#34530.

CALEX, CALEY
K1:CAL-PCAL_{EX,EY}_TCAM_{MAIN,PATH1,PATH2}_{X,Y}

Images attached to this comment
CAL (General)
dan.chen - 18:16 Friday 11 July 2025 (34530)
K1:VIS-ETMX_GOOD_OPLEV_YAW

A CAL TCam session was performed to obtain the beam position information necessary for Pcal. The parameters have already been updated, and the SDF has already been accepted.

Operator: DanChen, ShingoHido

Update Time: 2025/07/11 18:06:51

EPICS Key Before [mm] After [mm] Δ (After - Before) [mm]
K1:CAL-PCAL_EX_TCAM_PATH1_X 3.20691 mm 3.26692 mm +0.06001 mm
K1:CAL-PCAL_EX_TCAM_PATH1_Y 62.78539 mm 62.67463 mm -0.11076 mm
K1:CAL-PCAL_EX_TCAM_PATH2_X -0.21958 mm -0.12804 mm +0.09154 mm
K1:CAL-PCAL_EX_TCAM_PATH2_Y -63.36743 mm -63.39445 mm -0.02702 mm

Update Time: 2025/07/11 18:07:16

EPICS Key Before [mm] After [mm] Δ (After - Before) [mm]
K1:CAL-PCAL_EX_TCAM_MAIN_X 3.62390 mm 3.38665 mm -0.23725 mm
K1:CAL-PCAL_EX_TCAM_MAIN_Y 11.89945 mm 12.36073 mm +0.46128 mm

Update Time: 2025/07/11 18:07:39

EPICS Key Before [mm] After [mm] Δ (After - Before) [mm]
K1:CAL-PCAL_EY_TCAM_PATH1_X 1.27965 mm 1.51586 mm +0.23621 mm
K1:CAL-PCAL_EY_TCAM_PATH1_Y 63.48267 mm 63.81734 mm +0.33467 mm
K1:CAL-PCAL_EY_TCAM_PATH2_X -0.34771 mm -0.45437 mm -0.10666 mm
K1:CAL-PCAL_EY_TCAM_PATH2_Y -71.06132 mm -70.58075 mm +0.48057 mm

Update Time: 2025/07/11 18:07:56

EPICS Key Before [mm] After [mm] Δ (After - Before) [mm]
K1:CAL-PCAL_EY_TCAM_MAIN_X 8.78471 mm 8.41675 mm -0.36796 mm
K1:CAL-PCAL_EY_TCAM_MAIN_Y -3.91830 mm -4.09518 mm -0.17689 mm
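
For reference, the Δ column is simply (After - Before); e.g., for the first row of the first table:

# Recompute the delta for K1:CAL-PCAL_EX_TCAM_PATH1_X (values copied from the table above).
before = 3.20691   # [mm]
after  = 3.26692   # [mm]
print(f'delta = {after - before:+.5f} mm')   # -> +0.06001 mm, as listed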

 

MIF (General)
hirotaka.yuzurihara - 17:19 Friday 11 July 2025 (34527)
lockloss investigation during O4c (2025-07-10 10:53:19.187500 UTC)

[Yokozawa, Yuzurihara]

We performed the lockloss investigation for the recent lockloss at 2025-07-10 10:53:19.187500 UTC. The previous lockloss investigation was posted in klog#34259. This was the longest lock in O4c so far.

Quick summary 

I checked all of the lockloss phenomena reported in past klogs; none of them, except for the OMC DCPD saturation, occurred just before this lockloss.
The OMC saturation occurred just before the lockloss, but how quickly it saturated seems to be different from the past saturations. Yokozawa-san and I checked the time series and listed the possible causes of the saturation.

Details

  • There was no excess in the seismic motion at 1~10 Hz. (Figure 1)
  • Regarding oscillations, we can see a 0.83 Hz oscillation in PR2 pitch, but its amplitude is not large. (Figure 2)
    • We can see a coincident oscillation (0.83 Hz) in ASC PRC2 pitch and ASC MICH pitch. (Figure 3)
  • There was a small excess in the PRM feedback signal, but it seems too small to cause the lockloss. (Figure 4)
  • Regarding drift, there was no drift of the Xarm and Yarm mirrors over 30 minutes. (Xarm, Yarm)
  • Regarding the BPC control, there was no strange behavior just before the lockloss.
  • Regarding control saturation, there was no saturation on BS or ETMX.
  • There was no glitch on the ETMY MN oplev.
  • There was no earthquake around the lockloss time.
    • We can see large seismic motion at 3~10 Hz and 10~30 Hz. This excess ended 20 minutes later, but its amplitude is not enough to cause the lockloss. (Figure)

Quick OMC saturation

At this lockloss, we can see the OMC saturation that we have reported many times in past klogs. This is likely the direct cause of the lockloss. Yokozawa-san and I checked the original cause of the OMC saturation.
One important difference between this saturation and the past ones is how quickly the saturation was reached. As seen in Figure 9 and Figure 10, the DCPD signal reached saturation within 1 ms, which is remarkably quick!
We think it is difficult to produce such a quick saturation through the suspension, so we guess it might be associated with an electrical signal or a glitch.

In the past saturations, we could see several oscillations just before reaching saturation, as shown in Figure 11 or this. This might be a hint for investigating the cause further.
Anyway, this was a new phenomenon for us (or I had missed it before...).
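
A rough sketch of how such a rise time could be checked from the raw data is below; the DCPD channel name and the saturation level are assumptions, not the actual channel/threshold used in this check.

# Minimal sketch (assumed channel name and saturation level): estimate how fast the DCPD
# signal went from a quiet level to saturation around the lockloss time.
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

t0 = to_gps('2025-07-10 10:53:19.187500')   # lockloss time (UTC) from this report
chan = 'K1:OMC-TRANS_DC_A_OUT_DQ'           # placeholder DCPD channel name
sat_level = 2**15                           # assumed ADC saturation level [counts]

data = TimeSeries.get(chan, t0 - 0.5, t0 + 0.1)
t = data.times.value
v = abs(data.value)
saturated = t[v >= sat_level]
quiet = t[v <= 0.1 * sat_level]
if saturated.size:
    t_sat = saturated[0]
    before = quiet[quiet < t_sat]
    if before.size:
        print(f'rise from <10% to saturation in {(t_sat - before[-1]) * 1e3:.2f} ms')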

Other findings

  • The voltage monitor channels
    • While K1:PEM-VOLD_OMC_RACK_DC_M18_OUT_DQ is stable, K1:PEM-VOLD_OMC_RACK_DC_P18_OUT_DQ shows a drift behavior and was somehow reset. (Figure 12)
    • K1:PEM-VOLT_AS_TABLE_GND_OUT_DQ is almost stable but shows unstable behavior at -5 hours and just before the lockloss.
    • It is highly important to check the ground condition around the AS port and the OMC rack.
  • PMC control
    • K1:PSL-PMC_LO_POWER_MON_OUT16 shows some jumps before the lockloss. It would be very helpful if a PMC expert could comment on this phenomenon.
Images attached to this report
OBS (Summary)
shoichi.oshino - 17:07 Friday 11 July 2025 (34529)
Operation shift summary
Operators' names: Tomaru, Oshino
Shift time: 9-17 (JST)
Check Items:

VAC: No issues were found.
CRY cooler: No issues were found.
Compressor: No issues were found.

IFO state (JST):
09:00 The shift was started. The first half of the calibration work was in progress.
Maintenance work.
17:00 This shift was finished. The second half of the calibration work was in progress.