Reports of 32006
CRY (Cryostat IX)
shinji.miyoki - 7:16 Wednesday 16 July 2025 (34569) Print this report
IX REF1 cooler malfunction

The IX_REF1_4K/50K temperatures drastically increased, as shown in Fig. 1. We will continue to monitor. As you know, the REF2/4 cryocoolers are also not working in IX, so the situation is getting worse. REF3 is the only cryocooler in IX that is still working well.

The increases in the vacuum pressure came from N2, CO2, and Ar released from the REF4_4K_HEAD because its temperature exceeded ~28 K.

Images attached to this report
OBS (Summary)
shinji.miyoki - 7:11 Wednesday 16 July 2025 (34568) Print this report
Comment to Operation shift summary (34547)

EY_4K_REF3_50K temp seems to continue decreasing.

Images attached to this comment
MIF (Noise Budget)
kenta.tanaka - 17:47 Tuesday 15 July 2025 (34567) Print this report
TF measurement and Noise projection from IMC WFS to DARM by shaking IP1 and IP2

Ushiba, Tanaka

We performed TF measurements from the IMC WFSs to DARM by shaking the IP1 and IP2 PZTs. However, since many earthquakes occurred during today's commissioning time, we could not perform the measurement from IP2 YAW to DARM. We completed the measurements for the other DoFs. Figs. 1-3 show the results.

Then, we performed the noise projection from IMC WFS1 PIT/YAW to DARM using the TFs from WFS1 to DARM measured by shaking IP1. Fig. 4 shows the result: the red curve is the DARM spectrum, the blue curve is the projection of WFS1 PIT, and the green curve is the projection of WFS1 YAW. According to the projection, some peaks, for example at ~152 Hz and ~281 Hz, seem to be caused by jitter in the PSL. However, the heights of some peaks, especially around 330 Hz and 360 Hz, seem to surpass the DARM spectrum. One possibility is that the coherences from the WFSs to DARM while shaking the IPs were not sufficient in the first place, so we may need to increase the coherence by increasing the excitation amplitude or the averaging time. Another possibility is that the WFS1 signal is not decoupled from the other WFSs, so we may use the decoupled sensors (DOF{4,5}_{P,Y}_IN1) instead of WFS1 and WFS2.
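
A minimal sketch of the projection step described above, assuming the measured TF from WFS1 to DARM, its coherence, and the in-lock WFS1 spectrum are already available as arrays on a common frequency vector (the variable names and the coherence threshold are placeholders, not the actual analysis code):

```python
import numpy as np

# Sketch of the noise projection from IMC WFS1 to DARM (placeholder data layout).
# Assumed inputs, all on a common frequency vector f [Hz]:
#   tf_wfs1  : complex TF from WFS1 PIT (or YAW) to DARM, measured by shaking IP1
#   coh      : coherence of that TF measurement (0..1)
#   asd_wfs1 : WFS1 amplitude spectral density during normal lock (no excitation)
#   asd_darm : DARM amplitude spectral density for comparison

def project_to_darm(tf_wfs1, coh, asd_wfs1, asd_darm, coh_min=0.5):
    """Project the WFS1 spectrum onto DARM through the measured TF.

    Bins with coherence below coh_min are masked, because the TF estimate
    there is unreliable and the projection can spuriously exceed DARM.
    """
    projection = np.abs(tf_wfs1) * asd_wfs1
    projection = np.where(coh >= coh_min, projection, np.nan)
    ratio = projection / asd_darm  # values > 1 are unphysical -> suspect coherence or cross-coupling
    return projection, ratio
```

If the masked bins include the 330 Hz and 360 Hz peaks, that would support the low-coherence explanation rather than a real excess coupling.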

 

Images attached to this report
OBS (Summary)
takashi.uchiyama - 17:00 Tuesday 15 July 2025 (34561) Print this report
Operation shift summary
Operator's name: Uchiyama, Takahashi
Shift time: 9-17 (JST)
*Commissioning day

Check Items:
VAC: No issues were found. (Due to the temperature increase of the IX REF1 cryocooler, the vacuum pressure reached 10^-5 Pa in the MICH area.)
CRY cooler:
10:00 IX: 4K_REF1_50K_HEAD (84.36 -> 110.47 K). Informed Kimura and Ushiba.
Consequently, the elevated temperature caused the evaporation of N2, O2, Ar, and CO2 from the inner shield, and the vacuum pressure around IX also increased. Notably, no serious excess of H2O has been detected (Fig. 1).

16:00 IX: 4K_REF1_50K_HEAD (107.42 -> 122.73 K). Informed Kimura and Ushiba.
There has been no detection of a serious excess of H2O.

Compressor: No issues were found.
GAS filter output: No issues were found.

IFO state (JST):
09:00 The shift started. The status was "LOCKING". The OBS INTENT was OFF for the commissioning work.
 During commissioning time, there were many earthquakes, which broke the IFO lock and disturbed the measurements.
14:28 OBS intent has been turned on. -> "OBSERVING"
15:00 Lock lost
15:34 "OBSERVING"
17:00 This shift finished. "OBSERVING"
Images attached to this report
MIF (General)
Shingo Hido - 15:12 Tuesday 15 July 2025 (34565) Print this report
OLTF measurement on July 15 after the site work

I measured the OLTFs of DARM, MICH, and PRCL (Figs. 1-3).
There is no significant difference in the OLTFs, so the calibration should be fine.


Compared to the recent OLTFs (klog#34412 and klog#34533), the deviations are within approximately 0.3 dB for MICH and PRCL, and within 0.1 dB for DARM.
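
A minimal sketch of the comparison quoted above, assuming the two OLTF measurements are available as complex arrays on a common frequency vector (variable names are placeholders):

```python
import numpy as np

# Compare today's OLTF with a reference measurement (e.g. klog#34412 or #34533) in dB.
# tf_new, tf_ref: complex OLTFs on the same frequency vector (placeholder arrays).
def oltf_deviation_db(tf_new, tf_ref):
    """Magnitude deviation in dB at each frequency bin."""
    return 20.0 * np.log10(np.abs(tf_new) / np.abs(tf_ref))

# Example: worst-case deviation over the measured band.
# dev = oltf_deviation_db(tf_new, tf_ref)
# print(f"max |deviation| = {np.nanmax(np.abs(dev)):.2f} dB")  # expect <~0.3 dB for MICH/PRCL, <~0.1 dB for DARM
```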

Images attached to this report
CRY (General)
shinji.miyoki - 15:04 Tuesday 15 July 2025 (34566) Print this report
Comment to IM Temperature adjustment (34559)

I recovered the voltage settings for the IM heaters for ITMY and ETMX around 14:40 because OBS had started.

OBS (General)
kenta.tanaka - 14:34 Tuesday 15 July 2025 (34564) Print this report
Comment to Set observing bit (34099)

We turned on the OBS INTENT around 14:29 JST after today's calibration check by Hido-kun.

DGS (General)
dan.chen - 12:52 Tuesday 15 July 2025 (34563) Print this report
Comment to EY TCam server seems to be unreachable (34542)

Time stamp:
0948: Entered the tunnel.
0951: Entered center front room
0951: Entered center experimental area
0955: Entered Yarm
1011: Entered Yend
1051: Out from Yend
1108: Entered center experimental area
1112: Entered parking area at center
1115: Went out tunnel and closed the shutter of the tunnel entrance.

DGS (General)
dan.chen - 12:48 Tuesday 15 July 2025 (34562) Print this report
Comment to EY TCam server seems to be unreachable (34542)

With Satoru Ikeda

Summary:
On-site investigation revealed that the connectivity issue of the EYA TCam server was caused by a faulty UPS. We removed the UPS and connected the rebooter directly to a 100 VAC outlet, which successfully restored the EYA TCam functionality.

Details:

  • We physically visited the EYA booth to inspect the UPS and server setup.

  • The UPS showed no visual indicators or alarm sounds.

  • We confirmed that the UPS input power was normal (100 VAC present).

  • We removed the UPS from the setup and attempted a reset with the connected load (rebooter) disconnected, but there was no reaction from the UPS—no LEDs, no sound.

  • Based on this, we judged that the UPS was faulty and did not proceed with battery replacement.

  • We brought the UPS back to Mozumi for further inspection.

  • The rebooter (to which the EY TCam server is connected) was plugged directly into a 100 VAC outlet.

  • After power-on, the TCam server booted normally. We verified that TCam image capture was functional.

Next Steps:

  • The UPS has already been moved to Mozumi. We will check it later, but it is likely that it needs to be replaced.

CRY (Cryo-payload EX)
tomotada.akutsu - 9:04 Tuesday 15 July 2025 (34560) Print this report
Comment to OPLEV value changes check at EX (34556)

There have been many mysteries of this kind for oplev sums since iKAGRA. One option to resolve this phenomenon would be to set a direct power monitor just after the collimator of the oplev light source.

CRY (General)
shinji.miyoki - 6:57 Tuesday 15 July 2025 (34559) Print this report
IM Temperature adjustment

Because of high seismic noise due to several big earthquakes, an earthquake swarm, and a typhoon (Fig. 4), the IFO was hardly ever locked last night. Therefore, the IM temperatures of ITMY and ETMX decreased significantly. I kept adjusting the heater currents for them. The attached values are the original settings before the adjustments.

Figs. 1 and 2 show the original settings for ITMY and ETMX. I increased each by 2 V for several hours, and after that reduced each by 1 V, so the present voltages are 1 V higher than the original settings.

I will recover the original setting values after confirming the stable, continuous lock.

Images attached to this report
Comments to this report:
shinji.miyoki - 15:04 Tuesday 15 July 2025 (34566) Print this report

I recovered the voltage settings for the IM heaters for ITMY and ETMX around 14:40 because OBS had started.

OBS (Summary)
shinji.miyoki - 6:41 Tuesday 15 July 2025 (34558) Print this report
Comment to Operation shift summary (34547)

The EY_4K_REF3_50K temperature seems to have started a slight decrease, while the outer shield (EY_80K_SHIELD_TOP) temperature keeps increasing. We need several days of monitoring.

Images attached to this comment
CRY (Cryo-payload EX)
shinji.miyoki - 6:38 Tuesday 15 July 2025 (34556) Print this report
OPLEV value changes check at EX

I checked the OPLEV sum for TM and MN over the last ~1 month.

The TM OPLEV sum seems to have a ~1-day-period oscillation, with a smaller half-day-period oscillation. Is it related to tidal motion? The total value seems to decrease gradually.

The MN OPLEV sum seems to have a ~1-week-period oscillation.
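
A minimal sketch of how the suspected ~1-day and half-day periodicity could be checked from the one-month trend of the oplev SUM, assuming evenly sampled minute-trend data (the array and sampling interval here are placeholders):

```python
import numpy as np

# Look for ~1 cycle/day and ~2 cycles/day components in the oplev SUM trend.
# sum_trend: one-month minute-trend of the TM oplev SUM (placeholder array), dt = 60 s.
def cycles_per_day_spectrum(sum_trend, dt=60.0):
    """Return (frequency in cycles/day, amplitude spectrum) of the detrended series."""
    n = len(sum_trend)
    t = np.arange(n)
    detrended = sum_trend - np.polyval(np.polyfit(t, sum_trend, 1), t)  # remove the gradual decrease
    amp = np.abs(np.fft.rfft(detrended)) / n
    f_cpd = np.fft.rfftfreq(n, d=dt) * 86400.0  # Hz -> cycles/day
    return f_cpd, amp

# Peaks near 1 and 2 cycles/day would support a diurnal/tidal origin;
# the removed linear term corresponds to the gradual decrease of the total SUM.
```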

Images attached to this report
Comments to this report:
tomotada.akutsu - 9:04 Tuesday 15 July 2025 (34560) Print this report

There have been many mysteries of this kind for oplev sums since iKAGRA. One option to resolve this phenomenon would be to set a direct power monitor just after the collimator of the oplev light source.

MIF (General)
shinji.miyoki - 23:23 Monday 14 July 2025 (34557) Print this report
Peak around 110 Hz has become large recently

Recently (since Sunday), the peak around 110 Hz has become quite large. Do we need a beam position adjustment?

Images attached to this report
OBS (Summary)
ryutaro.takahashi - 17:00 Monday 14 July 2025 (34555) Print this report
Operation shift summary

Operator's name: Kimura, Takahashi
Shift time: 9-17 (JST)

Check Items:

VAC: No issues were found.
CRY cooler: 
  10:00 We found a 10K difference in 4K_REF3_50K_HEAD/BAR (Fig.1).
              Kimura-san checked Q-Mass (Fig.2).
  16:00 Temperature drift became small (Fig.3).
              Kimura-san checked Q-Mass (Fig.4).
Compressor: No issues were found.

IFO state (JST):

09:00 The shift started. The status was "OBSERVING".
10:10 Lock lost.
10:34 "OBSERVING"
13:38 Lock lost.
14:42 "OBSERVING"
14:57 Lock lost.
17:00 This shift finished. The status was "LOCKING".

Images attached to this report
OBS (Summary)
shinji.miyoki - 9:51 Monday 14 July 2025 (34553) Print this report
Comment to Operation shift summary (34547)

This is the first trouble of this kind for this cryocooler. In past similar troubles in IX, the increase to 120 K took one day, cooling started just after the peak, and it took 5 days to return to 90 K. Of course, there is no guarantee of recovery.

OBS (Summary)
shinji.miyoki - 8:57 Monday 14 July 2025 (34552) Print this report
Comment to Operation shift summary (34547)

The EY_4K_REF3_50K temperature is now over 110 K, and the temperature of EY_4K_REF4_50K has also exceeded the critical temperature, so a slight release of H2O is expected.

The vacuum level at the EYT side slightly increased from 3.6 to 4.1 x 10^-6 Pa. No changes in vacuum level at the EYGV side. No serious temp changes in the duct shields.

One concern is that the temperature of EY_80_SHIELD_TOP is now increasing, and we should avoid letting it exceed the critical temperature.

Images attached to this comment
DGS (General)
dan.chen - 6:21 Monday 14 July 2025 (34551) Print this report
Comment to EY TCam server seems to be unreachable (34542)

I heard from Kimura-san that a battery alarm has been triggered on the UPS around the EYA chamber.
It might be the UPS related to the EYA TCam server.
It is unclear at this point whether it is related to the current trouble.
The replacement battery arrived last week and is ready to be installed.

DGS (General)
shinji.miyoki - 22:05 Sunday 13 July 2025 (34550) Print this report
Comment to EY TCam server seems to be unreachable (34542)

> So this Tuesday or Thursday seems to be a good day for recovering the TCam (though I'm not sure of the detailed work plans for this Tuesday and Thursday yet).

I agree.

DGS (General)
satoru.ikeda - 17:56 Sunday 13 July 2025 (34549) Print this report
Comment to EY TCam server seems to be unreachable (34542)

Since the rebooter is connected to the UPS, there is also a suspicion that something might be wrong with the UPS.

OBS (Summary)
koji.nakagaki - 17:16 Sunday 13 July 2025 (34547) Print this report
Operation shift summary

Operators' names: Hirose, Nakagaki
Shift time: 9-17 (JST)

Check Items:

VAC: No issues were found.
CRY cooler: 
  10:00 We contacted Ushiba-san and Kimura-san because we found a 10K difference in temperature checks. (Fig1)
               Kimura-san checked Q-Mass and said no urgent action was needed. (Fig2)
  16:00 Issue could not be confirmed. (Fig3, Fig4)

Compressor: No issues were found.

IFO state (JST):

09:00 The shift was started. The status was "OBSERVING".
17:00 This shift was finished. The status was "OBSERVING".

Problem with EY-TCam images not displaying in control room (Klog #34542)

Images attached to this report
Comments to this report:
shinji.miyoki - 8:57 Monday 14 July 2025 (34552) Print this report

The EY_4K_REF3_50K temperature is now over 110 K, and the temperature of EY_4K_REF4_50K has also exceeded the critical temperature, so a slight release of H2O is expected.

The vacuum level at the EYT side slightly increased from 3.6 to 4.1 x 10^-6 Pa. No changes in vacuum level at the EYGV side. No serious temp changes in the duct shields.

One concern is that the temperature of EY_80_SHIELD_TOP is now increasing, and we should avoid letting it exceed the critical temperature.

Images attached to this comment
shinji.miyoki - 9:51 Monday 14 July 2025 (34553) Print this report

This is the first trouble of this kind for this cryocooler. In past similar troubles in IX, the increase to 120 K took one day, cooling started just after the peak, and it took 5 days to return to 90 K. Of course, there is no guarantee of recovery.

shinji.miyoki - 6:41 Tuesday 15 July 2025 (34558) Print this report

The EY_4K_REF3_50K temperature seems to have started a slight decrease, while the outer shield (EY_80K_SHIELD_TOP) temperature keeps increasing. We need several days of monitoring.

Images attached to this comment
shinji.miyoki - 7:11 Wednesday 16 July 2025 (34568) Print this report

EY_4K_REF3_50K temp seems to continue decreasing.

Images attached to this comment
DGS (General)
takahiro.yamamoto - 14:19 Sunday 13 July 2025 (34546) Print this report
Comment to EY TCam server seems to be unreachable (34542)
Hirose-san tried to recover the EY TCam, but it didn't come back. So I checked the situation and found that not only the EY TCam server but also its rebooter was unreachable. The server for the laser control of EY Pcal and its rebooter, which are connected to the same network switch at the EYA booth, are reachable, so the network around EYA has no problem. I'm not sure there is a way to recover a trouble on the rebooter itself remotely. If there is no way, we need to go to EYA to recover them.
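
A minimal sketch of the reachability check described above; cam-eya is the server name mentioned in this thread, while the other hostnames are placeholders, not the actual DGS names:

```python
import subprocess

# Ping the devices at the EYA booth to localize the fault, as in the check above.
# Only "cam-eya" appears in this thread; the other hostnames are placeholders.
HOSTS = {
    "EY TCam server": "cam-eya",
    "EY TCam rebooter": "rebooter-tcam-eya",
    "EY Pcal laser server": "pcal-eya",
    "EY Pcal rebooter": "rebooter-pcal-eya",
}

def is_reachable(host, count=2, timeout_s=2):
    """Return True if the host answers ping (Linux 'ping' options)."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for label, host in HOSTS.items():
        print(f"{label:22s} {host:20s} {'OK' if is_reachable(host) else 'UNREACHABLE'}")
```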

It is not so urgent unless we get into a situation that requires initial alignment, so we can probably wait until a weekday. But if we leave it until Friday, the timetable of maintenance tasks will be delayed by roughly 1 hour, because the TCam photo sessions must wait for the recovery work, and going to EYA will probably be done after the weekly calibration measurement. I want to avoid an extension of the whole maintenance time, because completing all the work by 22:00 is already a tight schedule. I guess that the recovery work can be done in parallel with other commissioning activities. So this Tuesday or Thursday seems to be a good day for recovering the TCam (though I'm not sure of the detailed work plans for this Tuesday and Thursday yet).
CRY (Cryo-payload R&D)
dan.chen - 6:19 Sunday 13 July 2025 (34543) Print this report
Comment to Surface figure measurements for sapphire components (34489)

> Is it possible to mask the edge of 1.5mm from the measurement results of HCB surface of sapphire ears?
Yes, that's possible. I'm currently waiting for my software account to be prepared.

I’ve prepared a detailed report.
(I’ll update it once I receive the account and apply an appropriate mask to the ear data.)
 

Non-image files attached to this comment
DGS (General)
takahiro.yamamoto - 23:51 Saturday 12 July 2025 (34542) Print this report
EY TCam server seems to be unreachable
Dear tomorrow's shifters,

According to the beam spot archiver, the auto-recovery for the TCam viewer isn't working for EY.
The EY TCam was alive at least until 11:30 JST.
But it has not been visible since 11:40 JST.

If it's a problem on Vinagre on k1mon4, it should be recovered automatically.
So it seems to be a problem on PlanetaryImager on cam-eya.
It can be recovered by the procedure of Sec-1.1.5 of DGS manual (JGW-G2516704).
Please try to do it.

-----
We can see that the image layout and resolution changed between 7/11 17:40 JST and 7/11 17:50 JST.

Usually, the camera viewer shows an image at 1/4 of the camera's full resolution to reduce network traffic and CPU load; this is important to avoid a hang-up of the camera server and/or applications. On the other hand, an image at full (1/1) resolution is taken during the TCam session. We know that the image resolution is sometimes not reverted to 1/4 afterwards. (I have never reproduced this phenomenon by a manual request to the guardian, so I suspect it comes from the compatibility between the TCam guardian code and the camera session script.) Anyway, this hang-up seems to have been caused by the large network traffic and heavy CPU load due to the un-reverted image resolution after the camera session.
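
As a rough illustration of why the un-reverted resolution matters: the uncompressed stream rate scales with the pixel count, so a full-resolution stream is 16 times the usual 1/4-resolution one. The sensor size, bit depth, and frame rate below are assumed values for illustration, not the actual TCam specification.

```python
# Rough estimate of the viewer streaming load at 1/4 vs 1/1 resolution.
# Sensor size, bit depth, and frame rate are assumed values, not the TCam spec.
width, height = 1920, 1200   # assumed full sensor resolution [px]
bits_per_px = 8              # assumed mono8 readout
fps = 10                     # assumed viewer frame rate

def stream_mbps(scale):
    """Uncompressed stream rate in Mbit/s for a 1/scale-resolution image."""
    return (width // scale) * (height // scale) * bits_per_px * fps / 1e6

print(f"1/4 resolution: {stream_mbps(4):6.1f} Mbit/s")
print(f"1/1 resolution: {stream_mbps(1):6.1f} Mbit/s  (16x the usual load)")
```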
Comments to this report:
takahiro.yamamoto - 14:19 Sunday 13 July 2025 (34546) Print this report
Hirose-san tried to recover the EY TCam, but it didn't come back. So I checked the situation and found that not only the EY TCam server but also its rebooter was unreachable. The server for the laser control of EY Pcal and its rebooter, which are connected to the same network switch at the EYA booth, are reachable, so the network around EYA has no problem. I'm not sure there is a way to recover a trouble on the rebooter itself remotely. If there is no way, we need to go to EYA to recover them.

It is not so urgent unless we get into a situation that requires initial alignment, so we can probably wait until a weekday. But if we leave it until Friday, the timetable of maintenance tasks will be delayed by roughly 1 hour, because the TCam photo sessions must wait for the recovery work, and going to EYA will probably be done after the weekly calibration measurement. I want to avoid an extension of the whole maintenance time, because completing all the work by 22:00 is already a tight schedule. I guess that the recovery work can be done in parallel with other commissioning activities. So this Tuesday or Thursday seems to be a good day for recovering the TCam (though I'm not sure of the detailed work plans for this Tuesday and Thursday yet).
satoru.ikeda - 17:56 Sunday 13 July 2025 (34549) Print this report

Since the rebooter is connected to the UPS, there is also a suspicion that something might be wrong with the UPS.

shinji.miyoki - 22:05 Sunday 13 July 2025 (34550) Print this report

> So this Tuesday or Thursday seems to be a good day for recovering the TCam (though I'm not sure of the detailed work plans for this Tuesday and Thursday yet).

I agree.

dan.chen - 6:21 Monday 14 July 2025 (34551) Print this report

I heard from Kimura-san that a battery alarm has been triggered on the UPS around the EYA chamber.
It might be the UPS related to the EYA TCam server.
It is unclear at this point whether it is related to the current trouble.
The replacement battery arrived last week and is ready to be installed.

dan.chen - 12:48 Tuesday 15 July 2025 (34562) Print this report

With Satoru Ikeda

Summary:
On-site investigation revealed that the connectivity issue of the EYA TCam server was caused by a faulty UPS. We removed the UPS and connected the rebooter directly to a 100 VAC outlet, which successfully restored the EYA TCam functionality.

Details:

  • We physically visited the EYA booth to inspect the UPS and server setup.

  • The UPS showed no visual indicators or alarm sounds.

  • We confirmed that the UPS input power was normal (100 VAC present).

  • We removed the UPS from the setup and attempted a reset with the connected load (rebooter) disconnected, but there was no reaction from the UPS—no LEDs, no sound.

  • Based on this, we judged that the UPS was faulty and did not proceed with battery replacement.

  • We brought the UPS back to Mozumi for further inspection.

  • The rebooter (to which the EY TCam server is connected) was plugged directly into a 100 VAC outlet.

  • After power-on, the TCam server booted normally. We verified that TCam image capture was functional.

Next Steps:

  • The UPS has already been moved to Mozumi. We will check it later, but it is likely that it needs to be replaced.

dan.chen - 12:52 Tuesday 15 July 2025 (34563) Print this report

Time stamp:
0948: Entered the tunnel.
0951: Entered center front room
0951: Entered center experimental area
0955: Entered Yarm
1011: Entered Yend
1051: Out from Yend
1108: Entered center experimental area
1112: Entered parking area at center
1115: Went out tunnel and closed the shutter of the tunnel entrance.

OBS (Summary)
koji.nakagaki - 17:18 Saturday 12 July 2025 (34541) Print this report
Operation shift summary

Operators' names: Hirose, Nakagaki
Shift time: 9-17 (JST)

Check Items:

VAC: No issues were found.
CRY cooler: No issues were found.
Compressor: No issues were found.

IFO state (JST):

09:00 Due to the work done the previous day, 
  OBS INTENT was "OFF", INTERFEROMETER STATUS was "CALIB NOT READY", and LENGTH SENSING CONTROL was "OBSERVATION".

   (Klog #34533) (Klog #34539)

14:37 Ushiba-san turned ON the OBS INTENT; the status is now "OBSERVING". (Klog #34540)
17:00 This shift was finished. The status was "OBSERVING".
