DGS (General)
takahiro.yamamoto - 22:12 Friday 03 May 2024 (29390) Print this report
Digital system came back online
This is the recovery work for the trouble reported in klog#29361.
The digital system finally came back online. awgtpman also works fine now.

-----
When I copied the NFS region (/opt) in order to salvage files, some file permissions were probably copied incompletely. rsync -a run by root copies both permissions and ownership, but rsync -a run by controls copies only permissions. The resulting permission errors seem to cause delays, which lead to a timeout when launching awgtpman.
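For reference, a minimal sketch of the difference (illustrative commands only, using the mount points described below as placeholder paths; not the exact commands that were run):

    # "-a" implies -rlptgoD, i.e. rsync tries to preserve permissions (-p) and owner/group (-o, -g),
    # but ownership can only actually be set when rsync runs with root privileges on the destination.
    sudo rsync -a /mnt/original/ /opt/                 # run as root: permissions and ownership preserved
    rsync -a /mnt/original/ /mnt/backup_controls/      # run as controls: permissions kept, but every file
                                                       # ends up owned by the invoking user (controls)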

Now the disk copied by root is mounted at /opt, the disk copied by controls is mounted at /mnt/backup_controls, and the original (partially broken) disk is mounted at /mnt/original.
The latest safe.snap has been salvaged onto /opt. Yesterday's changes other than SDF exist only in /mnt/backup_controls. After checking all files in /mnt/backup_controls, it should be removed and one more backup disk copied by root must be created.
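Before removing /mnt/backup_controls, a dry-run comparison along the following lines could confirm that nothing salvaged there is missing from /opt (a sketch only; nothing is modified):

    # -r: recurse, -n: dry run, -c: compare by checksum, -i: itemize what would differ
    rsync -rnci /mnt/backup_controls/ /opt/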
LAS (bKAGRA laser)
takahiro.yamamoto - 21:57 Friday 03 May 2024 (29389) Print this report
Comment to Laser down again (29388)
According to the laser power monitor (K1:LAS-POW_FIB_DC_INMON), as shown in the top panel of Fig. 1:
- [T1 cursor] a glitch occurred at 17:40 (maybe caused by restarting the real-time models)
- [crosshair] the data became strange at 17:42 (maybe caused by stopping the DAQ stream)
- [t=0] the power went to 0 W at 18:30.

After 17:42, the past data are not reliable because the DAQ stream was stopped. However, the PMC guardian, which reads the EPICS record directly, went to FAULT around 18:30 (see also the middle panel of Fig. 1). So the interlock seems to have triggered around 18:30.
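As an aside, the direct EPICS readback that the guardian uses can be cross-checked from a workstation with the standard EPICS command-line tools, independently of the DAQ stream, for example:

    caget K1:LAS-POW_FIB_DC_INMON        # one-shot read of the live EPICS value
    camonitor K1:LAS-POW_FIB_DC_INMON    # watch the value update, bypassing NDS/DAQ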

Around 18:30, I rebooted the front-end computers as follows.
- 18:25 k1als0 was rebooted.
- 18:29 k1ioo0 was rebooted.
- 18:35 k1ioo1 was rebooted.

On Tuesday, the interlock triggered before we started the DGS maintenance, so the DGS work was not related to the interlock trip. If the cause of this interlock trip is the same as the one on Tuesday, today's work should not be related to the interlock behavior either. But if it is different and the interlock has an electrical connection with the PSL table, rebooting the digital system around REFL might affect the interlock behavior.


By the way, the PT100 on the PSL table shows strange values on the SummaryPages. But there is no such strange behavior on ndscope via k1nds0 (see the bottom panel of Fig. 1). I'm not sure which NDS server is used to read past data for the SummaryPages. Anyway, the SummaryPages data this evening are not so reliable.
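For a cross-check, ndscope and the other NDS client tools can be pinned to a specific server via the NDSSERVER environment variable (port 8088 is assumed here as the usual NDS port), for example:

    NDSSERVER=k1nds0:8088 ndscope K1:LAS-POW_FIB_DC_INMON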

Images attached to this comment
LAS (bKAGRA laser)
shinji.miyoki - 20:25 Friday 03 May 2024 (29388) Print this report
Laser down again

YamaT-san reported that the laser was down again around 19:00.

I checked the chiller for the FB laser and confirmed that it was working. So we suspected the same interlock trouble as in the previous down. In addition, I suspected:

  • a contact problem of the thermometer on the water-cooled beam dumper,
  • interlock switch trouble,
  • insufficient water in the cooling-water circulator for the beam dumper.

According to Yuzu-san's information, the laser seemed to be healthy around 18:00 (JST).

Around 15:00 (?), YamaT-kun performed a full DAQ restart.

Some electrical influence between the DGS system and the interlock system through the grounding might exist. However, in the previous case, the laser down happened before the DGS maintenance activities.

Anyway, I asked Tanaka-kun, YamaT-san, and Yuzu-san to enter the PSL room tomorrow, check the above three items, and restart the laser.

 

Comments to this report:
takahiro.yamamoto - 21:57 Friday 03 May 2024 (29389) Print this report
DetChar (General)
shoichi.oshino - 17:04 Friday 03 May 2024 (29387) Print this report
Update navigation bar on SummaryPage
The navigation bar of the top page of LIGO's SummaryPage was updated.
The links to O3 and O4 are now in a drop-down format.
I made the same format for KAGRA's SummaryPage.
MIF (Noise Budget)
kentaro.komori - 15:21 Friday 03 May 2024 (29386) Print this report
Measurement of REFL PD sensing noise with and without light

I measured the noise spectra of the three error signals (REFL 45, 56, and 17), which use the light directly reflected from PRM, to estimate the sensing-noise limit of these channels, particularly that of CARM.
Ideally, the sensing noise is completely limited by the shot noise.

The attached figure shows the result with some light on the PDs.
It is compared to the result without light, which gives the electrical dark noise.

At high frequencies, the dark noise is still a dominant factor, whereas ideally the spectra should be limited by the shot noise.
We need to increase the input power to the PDs (see the note after this paragraph).
There are large excesses at low frequencies compared to the dark noise.
It might be because the optics inside the chamber around IFI are still in air, so I will perform the same measurement after vacuum pumping.
Another potential reason is the amplitude modulation caused by detuning of the RF sidebands.
I will check the spectra after the fine tuning again, as described in klog:29385.
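As a rough note on why more PD input power helps (a back-of-the-envelope estimate, not part of the measurement): the shot-noise ASD of the detected power is sqrt(2*h*nu*P), about 1.9e-11 W/rtHz for P = 1 mW at 1064 nm, and it grows only as sqrt(P), while the error-signal slope grows roughly linearly with P and the electrical dark noise does not grow at all. So raising the power on the PDs lifts the shot noise above the dark noise and improves the shot-noise-limited sensing noise roughly as 1/sqrt(P).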

Furthermore, even-number harmonics of 60 Hz are prominent in the spectra with light.
Since there are no such harmonics in the dark-noise measurement, they should be generated by the IMC control or by an unknown electrical coupling peculiar to the configuration with light.

After vacuum pumping, I will calculate the CARM sensing noise limit in units of Hz/√Hz and its projection onto the DARM sensitivity.

Images attached to this report
IOO (IMC)
kentaro.komori - 13:21 Friday 03 May 2024 (29385) Print this report
Fine tuning of LSC offset and sideband frequency

[Tanaka, Hirose, Komori]

Abstract:

We adjusted the common offset just after the input of the IMC LSC common-mode servo (-5.0 ± 0.3 mV with 14 dB input gain) and the fundamental frequency used to generate the multiple RF sidebands (5.6243667(1) MHz), so as to keep the carrier and the RF sidebands exactly on resonance in the IMC.
As a result, we achieved a precise estimate of the IMC length, accurate to 1 µm: L_IMC = 53.302438(1) m.
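As a quick consistency check (assuming the fundamental modulation frequency is tuned to exactly one free spectral range of the ring cavity, and reading L_IMC as the round-trip length): L_IMC = c / f_mod = 299792458 m/s / 5.6243667 MHz ≈ 53.302438 m, and the quoted frequency uncertainty of 0.0000001 MHz corresponds to δL ≈ L_IMC × δf/f ≈ 53.3 m × 1.8e-8 ≈ 1 µm, consistent with the stated precision.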

Details:

During the previous measurement in klog:29084, residual peaks at the modulation frequency of 1.023 kHz persisted in either I-phase or Q-phase demodulation signals even after tuning the RF sideband frequency.
We attribute this to an extra offset in the IMC length control, causing detuning of the carrier and either the upper or lower sideband.
To address this, we experimented with combinations of LSC offset tuning and sideband frequency adjustments.

The setup is the same as klog:29084.
Initially, we adjusted the sideband frequency to equalize the residual peak heights in both demodulation signals.
Subsequently, we fine-tuned the common offset of the IMC LSC common mode servo.
This method effectively reduced the heights of both peaks simultaneously, although we have not yet understood an unexpected swap of the I-phase and Q-phase peaks under minor adjustments of the sideband frequency (approximately 10 Hz).
After several iterations of this procedure, the residual peaks nearly vanished (the red and blue lines), yielding the results described above.

However, one hour later, upon rechecking the peak height, we observed the reappearance of residual peaks (the magenta and cyan lines).
We were unable to eliminate these peaks by solely adjusting the sideband frequency and common offset, suggesting that both parameters had drifted during this hour.
The drift in the IMC length may originate from re-locking of the IMC, which changes the locked point by a few µm, and from thermal expansion of the IMC mirrors and suspensions.
The offset drift may arise from the electrical circuits and from residual amplitude modulation due to slight mismatches in the input polarization to the EOM, caused by temperature drifts.
We must consider strategies to compensate for these drifts and assess their impact on interferometer sensitivity.

Images attached to this report
MIF (General)
tomotada.akutsu - 18:00 Thursday 02 May 2024 (29384) Print this report
Comment to Finalization of IFI-IMM-PRM: Day 7-2: IMM chamber (29343)

After today's acceptance check for the anticipated closing of IFI-IMM-PRM next week, here are some photos showing that the ISS beam passes through roughly the center of the relevant viewport window; the photos were taken last week.

Images attached to this comment
PEM (Center)
tatsuki.washimi - 15:54 Thursday 02 May 2024 (29383) Print this report
Comment to IFI Sack hammering (29357)

[Komori, Tanaka, Yokozawa, Washimi]

We performed the Hammering test for the IFI stack (+X, +Y side stack).

  • optical table -> optical table
  • top stack -> optical table
  • middle stack -> optical table
  • bottom stack -> optical table
  • base plate -> optical table
  • ground -> optical table
  • nothing -> optical table (as a reference)
  • base plate -> base plate
  • ground (leg) -> base plate
  • ground (leg) -> ground
  • ground (concrete) -> ground
Images attached to this comment
MIF (ASC)
hirose.chiaki - 14:10 Thursday 02 May 2024 (29381) Print this report
Comment to Cabling for WFSf3 (29281)

I just noticed that I made a mistake in the number of outputs of the Whitening Filter in the mini-rack. The number of Whitening Filter outputs on the mini-rack is 8, not 4.
kTanaka-san just checked the empty input ports of the ADCs in the IOO0 and IOO1 racks.
The following are the empty input ports.

  • IOO0 rack, ADC0: 8-19ch (FIG1)
  • IOO0 rack, ADC1: 24-31ch (FIG2)
  • IOO0 rack, ADC2: 4-11ch, 20-27ch (FIG3, FIG4)
  • IOO0 rack, ADC3: no empty input ports (FIG5)
  • IOO1 rack: no empty input ports (FIG6)

For the mini-rack, I would like to use 4-11ch and 20-27ch in ADC2, and additionally 8-15ch in ADC0 and 24-31ch in ADC1.
I will check with others to confirm that we can use these ports, and then do the additional cabling.

Images attached to this comment
MIF (General)
kenta.tanaka - 13:48 Thursday 02 May 2024 (29380) Print this report
Beam position check on HP beam dump before closing IFI-PRM area

 Komori, Tanaka

We confirmed that the beam position of the reflection from PRM, which is in the "MISALIGNED_BF" state, seems to be almost at the center of the HP beam dump (see the attached movie).

## what we did

  • When IMMT1 and 2 were in the ALIGNED state and PRM was in the MISALIGNED_BF state, we locked the IMC with ASC.
  • Then we stopped the ASC by setting the IMC ASC input gain to 0.
  • We confirmed the beam position on the HP beam dump and found that it is almost at the center of the dump.

 

 
Non-image files attached to this report
VIS (BS)
ryutaro.takahashi - 13:38 Thursday 02 May 2024 (29379) Print this report
Comment to Visual inspection of BS payload (29156)

[Ikeda, Takahashi]

We checked the OSEMs again. We took pictures of the OSEMs with a 360° camera (THETA). The flaps for OSEM #1, #4, and #5 were rotated by 40-50°.

Images attached to this comment
VIS (PR3)
naoatsu.hirata - 13:14 Thursday 02 May 2024 (29378) Print this report
Comment to PR3 recovery work (28509)

[Hirata, Dan Chen-san]

We recovered the PR3 suspension. The oplev position is around the center, and the IM V1 OSEM value (K1:VIS-PR3_IM_OSEMINF_V1_INMON) is about 6200.

  1. First, we moved IM pitch and yaw with the IM Pico and recovered the oplev position to 0. We moved pitch by -23000 and yaw by -700.
  2. The IM V1 OSEM was still around 9900, so we started to adjust it.
  3. Locked the suspended breadboard.
  4. Locked the IRM.
  5. Adjusted the IM V1 OSEM.
  6. Released the IRM and confirmed the IM V1 OSEM value (K1:VIS-PR3_IM_OSEMINF_V1_INMON). It is about 6200.
  7. Checked that the oplev is around the center.
  8. Released the suspended breadboard.
  9. Closed the chamber and confirmed that Guardian can go to the "aligned" state (Pic1). After that, we changed it to "Safe".
Images attached to this comment
VAC (IFI)
shinji.miyoki - 12:04 Thursday 02 May 2024 (29377) Print this report
Inspection of the surface of the flange with the copper plate at IFI

Posted by Miyoki on behalf of Kimura-san, for activities on 24 April.

The gasket surface of the removed IFI flange was visually inspected. Based on the inspection results, the following three points are estimated to have caused this flange to leak.
1. Scratches on the gasket surface of the flat flange with the copper plate (thin vertical scratches can be observed when shining a light on it): one location.
2. Scratches on the metal gasket surface (thin vertical scratches can be observed when illuminated): one scratch, possibly a trace of item 1.
3. Uneven traces on the gasket seal surface: if the gasket is tightened properly, the trace should be a circle of uniform width. Since the traces on the actual part alternate between narrow and wide, it is highly likely that the initial tightening of the claw clamps was not uniform.

Here are the countermeasures.
Since the instructions call for returning to the original mirror plate rather than reusing the flat flange with the copper plate, repair of the scratches on the flat flange gasket surface will not be performed. Instead, a visual inspection of the original mirror plate gasket surface will be performed before installation. In addition, the gasket will be changed from a metal gasket to an elastomer gasket.

DGS (General)
takahiro.yamamoto - 11:20 Thursday 02 May 2024 (29376) Print this report
Comment to Taking backup of the core system of DGS (29361)
The NFS region was unavailable on some DGS workstations
=> I re-mounted it on each WS. It's now available.

awgtpman problem
=> I launched awgtpman in tpman-only (without awg) mode as a temporary solution.
Test points are now available, but excitations are unavailable.
A full-scale investigation and restoration work will be done tomorrow in order to avoid conflicts with today's work.
CRY (Cryo-payload EY)
takafumi.ushiba - 8:45 Thursday 02 May 2024 (29375) Print this report
Comment to ETMY photosensor recovery (28811)

We connected the new cables to the photosensors and confirmed that the sensors are working.
We also checked the signals while touching the cables around the BF, and there seem to be no glitches.
So the new cables seem to be working well.

We will tie up the cables and fix them onto the payload.

DGS (General)
takahiro.yamamoto - 2:16 Thursday 02 May 2024 (29374) Print this report
Comment to Taking backup of the core system of DGS (29361)
I found that if awgtpman is launched in tpman-only mode or awg-only mode, it does not terminate. This means there is no problem with the awg and tp functions themselves. Because awggui requires both test-point and excitation channels, I couldn't check whether excitations work in awg-only mode. At least, test points are available in tpman-only mode, and I have been able to see them with ndscope, diaggui, etc. But only one awgtpman process can run per real-time model, so we must find a way to launch awgtpman in the combined awg+tpman mode.

According to a log analysis of awgtpman at launch, the timing of when it exits appears to be random. Since launching awgtpman should not take so long, it is possible that a timeout due to NFS performance (reading the configuration files and the channel list) is occurring. So I plan to reboot the DAQ and real-time front-ends again after restoring the NFS settings.
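One simple way to test this hypothesis would be to time the start-up reads against the NFS mount and against a local copy, for example (the file path below is only an illustration of the usual RTS layout, not the actual file awgtpman reads):

    time cat /opt/rtcds/kamioka/k1/chans/daq/K1ALS0.ini > /dev/null        # read over NFS
    cp /opt/rtcds/kamioka/k1/chans/daq/K1ALS0.ini /var/tmp/ && time cat /var/tmp/K1ALS0.ini > /dev/null   # local copy, for comparison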
VIS (PRM)
naoatsu.hirata - 0:34 Thursday 02 May 2024 (29366) Print this report
Comment to Recovery of PRM suspension (28380)

[Hirata, Dan Chen-san]

We took photos of the PRM payload earthquake stops. I uploaded today's photos to the KAGRA dropbox.

During this work, we found that two earthquake stops on the test-mass AR side were very close to the mirror surface (within 1 mm?). We talked with Takahashi-san and Ushiba-san and decided to withdraw them.

VIS (EX)
ryutaro.takahashi - 21:50 Wednesday 01 May 2024 (29373) Print this report
Comment to F0Y stepper motor of ETMX somehow stopped moving (29182)

I checked the motion of the F0Y FR. I operated the stepper motor from -4563 steps in increments of 1000 steps. The BF Y signal didn't change until -16563 steps. When I added one more -1000 steps, the signal changed from -770 to -850. Though the signal went back to -770 with +1000 steps, it didn't move in the plus direction anymore. The motor is working, but the motion of the wire receptacle on the bearing is not smooth due to large friction.

Images attached to this comment
CRY (Cryo-payload EY)
ryutaro.takahashi - 20:38 Wednesday 01 May 2024 (29372) Print this report
Comment to ETMY photosensor recovery (28811)

[Ushiba, Tamaki, Komori, Takahashi]

We continued the photosensor recovery. The replaced cables were fixed onto the bottom of BF, the cable anchor (small hexagon), and the suspension rod with cable ties.

Images attached to this comment
DGS (General)
takahiro.yamamoto - 18:41 Wednesday 01 May 2024 (29370) Print this report
Comment to Taking backup of the core system of DGS (29361)
We continued the maintenance work today.
The DAQ servers and real-time models came back with the backup disk.
But awgtpman couldn't be launched for almost all models for some reason.
So only DQ channels are available; TP and EXC channels are unavailable now.

I have no good idea how to solve this problem yet.
I'll try to make a plan for what we should do by tomorrow...

PEM (Center)
tatsuki.washimi - 17:59 Wednesday 01 May 2024 (29368) Print this report
Comment to OMC vibration study (29020)

Sorry, the legend was clipped.

It is the same for all plots.

Images attached to this comment
PEM (Center)
tatsuki.washimi - 17:56 Wednesday 01 May 2024 (29367) Print this report
Comment to OMC vibration study (29020)

I plotted the high-resolution (3 hours of data, 128 s FFT) ASDs and coherences for the geophones and ACCs on the OMC chamber, for the nighttime before the vacuum breaking.

Images attached to this comment
LAS (bKAGRA laser)
kenta.tanaka - 17:22 Wednesday 01 May 2024 (29365) Print this report
Recovery of Fiber amp. laser

Miyakawa, Tanaka

### abstract

We found that the interlock sensing the temperature on the beam dumper was activated. We reset the interlock according to Uchiyama-san's information. Then the original fiber amp. started emitting again. We don't know why the interlock was activated yesterday.

### What we did

  • First, we turned off the fiber amp. and the seed laser and restarted them, but the situation did not change. Second, we turned them off and disconnected the USB cable between the amp. and the control PC, then restored them. However, the situation did not change. Furthermore, we turned off the amp. and the seed laser, disconnected the USB cable, and turned off the PC, but the situation did not change. At last, we turned off the amp., the seed laser, and the PC, and disconnected the USB cable and the power cable of the amp. We found that the thermometer in the fiber amp. controller showed around 3 degrees even though the power cable was disconnected from the fiber amp.
  • Then, we found the amp. and the PC were connected via a USB cable -> a LAN cable -> another USB cable. Mio-sensei said the amp. is controlled with power supplied over the USB cable. So we suspected the PC could not supply enough power through the LAN cable, and we tried connecting the PC and the amp. directly with only a USB cable. However, the situation did not change.
  • We brought the other fiber amp. into the PSL room. We connected the power cable, the USB cable, and the interlock cable to the second fiber amp. in order to check the thermometer value when the amp. was turned on. This time, we did not disconnect the tube from the water chiller or the input/output optical fibers from the original fiber amp. We turned on the second fiber amp. with no emission and connected the controller app, and found that the thermometer of the second amp. also read 3 degrees. Moreover, the thermometer value did not change even when we disconnected the power cable. This is the same behaviour as the original amp., so we suspected something was wrong in a part common to the original setup.
  • The common parts used with the first and second amplifiers were the USB, power, and interlock cables; replacing the USB and power cables did not change the situation, so we suspected the remaining interlock. After checking with Uchiyama-san, we learnt that the interlock is triggered either by activation of one of the emergency stop buttons inside and outside the PSL, or by the temperature monitor on the beam dump that receives the reflected light from the PMC. We checked that the two emergency stop buttons were not activated and finally checked the temperature monitor, and found that the indicator marked "ALM" was glowing red. According to Uchiyama-san, this was due to an interlock trip by the temperature monitor, so we released this interlock. We then reconnected all the cables to the original fibre amplifier, switched it on, and connected it to the controller app, and the thermometer readings now showed around 20 degrees. Finally, we succeeded in obtaining a laser output of 1 W at zero applied current. Unfortunately, we do not know at this point why the interlock was triggered yesterday. For now, we have returned the applied current to the LD to its original value of 27.8 A.
MIR (PR3)
naoatsu.hirata - 16:33 Wednesday 01 May 2024 (29364) Print this report
Comment to First contact PR3 (29142)

[Hirata, Dan Chen-san]

We partially applied FC on the upper edge of the PR3 HR side. 15 minutes later, we peeled off the FC. The target edge-shaped residues were successfully removed. We can see two new small dots, but they are far from the center.

  1. Locked the suspended breadboard
  2. Locked the suspension (IM→RM→TM)
  3. Took surface photos
  4. Applied FC
  5. Peeled off the FC 15 minutes later
  6. Took surface photos
  7. Released the suspension (TM→RM→IM)
  8. Released the suspended breadboard

I uploaded today's photo to KAGRA dropbox.

Tomorrow morning, we will recover suspension.

LAS (bKAGRA laser)
kenta.tanaka - 23:30 Tuesday 30 April 2024 (29363) Print this report
Laser has stopped suddenly

Yamamoto, Tanaka

After today's maintenance work we noticed that the laser output was at zero. We checked how long it had been zero and found that the laser output had dropped to zero all at once at around 10:50 today. According to Yamamoto-san, this was before the maintenance work had been carried out.

Later, when we looked at the laser controller screen via the webcam in the PSL, we saw the error message "temperature too High, Master Fault" (Fig. 1). The temperature displayed on the controller was 52°C, much higher than the nominal value of 23°C.

When we entered the mine, we first checked whether the chiller was working properly; its temperature display showed the nominal value of 19 degrees Celsius, so it seemed to be working properly.

We then entered the PSL and touched the enclosure of the fibre amplifier and did not feel any heat in the enclosure. I also held up a sensor card around the laser's output port and found that a very weak light was emitted. The "seed" value on the controller was about 1.4. According to Nakano-san's manual, this "seed" value indicates the incident power to the fibre amplifier and should be between 1.0 and 2.5 as a nominal value. In other words, the incident power is considered to be normal. The current value of the NPRO controller was about 0.96 A, which is the nominal value, so the NPRO controller is probably normal.

According to Nakano-san's manual, in this case it was most likely a thermometer malfunction, which was fixed by switching off the laser, unplugging the connection cable from the amplifier to the PC and restarting the application, so we thought we would try this approach. However, at the beginning of the manual, there is a warning that if the amplifier is switched on while the seed laser is off, it may break down, while the operating procedure states that the power should be switched on from the amplifier. This contradicted the warning, and as we could not decide which was correct, we decided not to power down the whole laser today (although we are sure the warning is correct).

Instead, we first switched off the fibre amplifier only and restarted it. However, the situation remained the same. We then switched off the fibre amplifier, closed the app, and then restarted the amplifier and the app. However, the situation remained the same.

At this point, we noticed that before pressing the "ENABLE" button on the controller app, i.e. when the laser was not emitting, the thermometer on the controller was showing around 3.5°C, and when the "ENABLE" button was pressed, the thermometer value rose drastically to 52°C. When the "STOP" button was pressed, the thermometer value suddenly dropped from 52°C to about 3.5°C. This seems to be odd behaviour of the thermometer.

At any rate, as it was late, we finished today's investigation.

Tomorrow, we will try to restart the whole laser system including seed laser.
 

Images attached to this report