VIS (PRM)
naoatsu.hirata - 0:34 Thursday 02 May 2024 (29366)
Comment to Recovery of PRM suspension (28380)

[Hirata, Dan Chen-san]

We took photos of the PRM payload earthquake stops. I uploaded today's photos to the KAGRA dropbox.

During this work, we found that two earthquake stops on the test-mass AR side were very close to the mirror surface (within 1 mm?). We discussed this with Takahashi-san and Ushiba-san and decided to withdraw them.

VIS (EX)
ryutaro.takahashi - 21:50 Wednesday 01 May 2024 (29373)
Comment to F0Y stepper motor of ETMX somehow stopped moving (29182)

I checked the motion of the F0Y FR. I operated the stepper motor from -4563 steps in increments of 1000 steps. The BF Y signal didn't change until -16563 steps. When I added one more -1000 steps, the signal changed from -770 to -850. Although the signal went back to -770 after +1000 steps, it didn't move in the plus direction any further. The motor is working, but the motion of the wire receptacle on the bearing is not smooth due to large friction.
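For reference, below is a minimal sketch of this kind of step-and-watch procedure in Python with pyepics; the channel names, step size, and settling time are illustrative assumptions, not the actual KAGRA channels or script.

# Hypothetical sketch only: channel names are placeholders.
import time
from epics import caget, caput

STEP_CMD = 'K1:VIS-ETMX_F0_STEPPER_Y_STEP'  # assumed stepper setpoint channel
BF_Y = 'K1:VIS-ETMX_BF_DAMP_Y_INMON'        # assumed BF Y readback channel

position = -4563
for _ in range(13):                  # walk from -4563 toward -17563
    position -= 1000                 # move by -1000 steps
    caput(STEP_CMD, position, wait=True)
    time.sleep(10)                   # let the motor and readback settle
    print(position, caget(BF_Y))     # watch for a change in the BF Y signal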

Images attached to this comment
CRY (Cryo-payload EY)
ryutaro.takahashi - 20:38 Wednesday 01 May 2024 (29372)
Comment to ETMY photosensor recovery (28811)

[Ushiba, Tamaki, Komori, Takahashi]

We continued the photosensor recovery. The replaced cables were fixed onto the bottom of the BF, the cable anchor (small hexagon), and the suspension rod with cable ties.

Images attached to this comment
DGS (General)
takahiro.yamamoto - 18:41 Wednesday 01 May 2024 (29370)
Comment to Taking backup of the core system of DGS (29361)
We continued the maintenance work today.
The DAQ servers and real-time models came back with the backup disks.
However, awgtpman couldn't be launched on almost all models for some reason.
So only DQ channels are available now; TP and EXC channels are unavailable.

I have no good idea how to solve this problem yet.
I'll try to make a plan for what we should do by tomorrow...

PEM (Center)
tatsuki.washimi - 17:59 Wednesday 01 May 2024 (29368)
Comment to OMC vibration study (29020)

Sorry, the legend was clipped.

It is the same for all plots.

Images attached to this comment
PEM (Center)
tatsuki.washimi - 17:56 Wednesday 01 May 2024 (29367)
Comment to OMC vibration study (29020)

I plotted high-resolution (3-hour data, 128 s FFT) ASDs and coherences for the geophones and accelerometers on the OMC chamber, for the nighttime before the vacuum breaking.
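For reference, a minimal sketch of how such ASDs and coherences can be computed with gwpy; the channel names and GPS times below are placeholders, not the actual ones used for these plots.

from gwpy.timeseries import TimeSeries

# ~3 hours of night-time data (placeholder GPS times and channel names)
start, end = 1398430000, 1398430000 + 3 * 3600
geo = TimeSeries.get('K1:PEM-SEIS_OMC_GEOPHONE_Z_OUT_DQ', start, end)
acc = TimeSeries.get('K1:PEM-ACC_OMC_CHAMBER_Z_OUT_DQ', start, end)

asd = geo.asd(fftlength=128, overlap=64)             # 128 s FFT, 50% overlap
coh = geo.coherence(acc, fftlength=128, overlap=64)  # coherence between sensors

asd.plot().savefig('omc_geophone_asd.png')
coh.plot().savefig('omc_geo_acc_coherence.png')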

Images attached to this comment
LAS (bKAGRA laser)
kenta.tanaka - 17:22 Wednesday 01 May 2024 (29365)
Recovery of Fiber amp. laser

Miyakawa, Tanaka

### Abstract

We found that the interlock sensing the temperature of the beam dumper had been activated. We reset the interlock according to Uchiyama-san's information, and then the original fiber amp. started emitting laser light again. We don't know why the interlock was activated yesterday.

### What we did

  • First, we turned off the fiber amp. and the seed laser and restarted them, but the situation did not change. Second, we turned them off and disconnected the USB cable between the amp. and the control PC, then restored them. However, the situation did not change. Furthermore, we turned off the amp. and the seed laser, disconnected the USB cable, and turned off the PC, but the situation did not change. At last, we turned off the amp., the seed laser, and the PC, and disconnected the USB cable and the power cable of the amp. We found that the thermometer in the fiber amp. controller showed around 3 degrees even though the power cable was disconnected from the fiber amp.
  • Then we found that the amp. and the PC were connected via a USB cable -> a LAN cable -> another USB cable. Mio-sensei said the amp. is controlled by power supplied through the USB cable, so we suspected the PC could not supply enough power because of the LAN cable. We therefore tried connecting the PC and the amp. directly with only a USB cable. However, the situation did not change.
  • We brought the other fiber amp. into the PSL room. We connected the power cable, the USB cable, and the interlock cable to this second fiber amp. in order to check the thermometer value when the amp. was turned on. This time, we did not disconnect the tube from the water chiller or the input/output optical fibers from the original fiber amp. We turned on the second fiber amp. with no emission and connected the controller app, and found that the thermometer of the second amp. also read 3 degrees. Moreover, the thermometer value did not change even when we disconnected the power cable. This is the same behaviour as the original amp., so we suspected something was wrong in a condition common to the original setup.
  • The parts common to the first and second amplifiers were the USB, power, and interlock cables; replacing the USB and power cables did not change the situation, so we suspected the remaining one, the interlock. After checking with Uchiyama-san, we learnt that the interlock is triggered either by activation of one of the emergency stop buttons inside and outside the PSL or by the temperature monitor on the beam dump that receives the reflected light from the PMC. We checked that the two emergency stop buttons were not activated, and finally checked the temperature monitor and found that the indicator marked "ALM" was glowing red. According to Uchiyama-san, this meant the interlock had been triggered by the temperature monitor, so we released this interlock. We then reconnected all the cables to the original fibre amplifier, switched it on, and connected it to the controller app; the thermometer reading now showed around 20 degrees. Finally, we succeeded in obtaining a laser output of 1 W at zero applied current. Unfortunately, we do not know at this point why the interlock was triggered yesterday. For now, we have returned the applied current to the LD to its original value of 27.8 A.
MIR (PR3)
naoatsu.hirata - 16:33 Wednesday 01 May 2024 (29364)
Comment to First contact PR3 (29142)

[Hirata, Dan Chen-san]

We applied FC partially to the upper edge part of the PR3 HR side. 15 minutes later, we peeled off the FC. The targeted edge-shaped residues were successfully removed. We can see two new small dots, but they are far from the center.

  1. Locked the suspended breadboard
  2. Locked the suspension (IM→RM→TM)
  3. Took surface photos
  4. Applied FC
  5. Peeled off the FC 15 min later
  6. Took surface photos
  7. Released the suspension (TM→RM→IM)
  8. Released the suspended breadboard

I uploaded today's photos to the KAGRA dropbox.

Tomorrow morning, we will recover the suspension.

LAS (bKAGRA laser)
kenta.tanaka - 23:30 Tuesday 30 April 2024 (29363)
Laser has stopped suddenly

Yamamoto, Tanaka

After today's maintenance work we noticed that the laser output was at zero. We checked how long it had been zero and found that the laser output had dropped to zero all at once at around 10:50 today. According to Yamamoto-san, this was before the maintenance work had been carried out.

Later, when we looked at the laser controller screen via the webcam in the PSL, we saw the error message "temperature too High, Master Fault" (Fig. 1). The temperature displayed on the controller was 52°C, much higher than the nominal value of 23°C.

When we entered the mine, we first checked whether the chiller was working properly; its temperature display showed the nominal value of 19°C, so it seemed to be working properly.

We then entered the PSL, touched the enclosure of the fibre amplifier, and did not feel any heat. I also held a sensor card up to the laser's output port and found that very weak light was being emitted. The "seed" value on the controller was about 1.4. According to Nakano-san's manual, this "seed" value indicates the incident power to the fibre amplifier and should be between 1.0 and 2.5 nominally; in other words, the incident power is considered normal. The current value of the NPRO controller was about 0.96 A, which is the nominal value, so the NPRO is probably normal as well.

According to Nakano-san's manual, the most likely cause in this case is a thermometer malfunction, which was previously fixed by switching off the laser, unplugging the connection cable from the amplifier to the PC, and restarting the application, so we thought we would try this approach. However, at the beginning of the manual there is a warning that the amplifier may break down if it is switched on while the seed laser is off, while the operating procedure states that the power should be switched on from the amplifier first. This contradicts the warning, and as we could not decide which was correct, we decided not to power down the whole laser today (although we are fairly sure the warning is correct).

Instead, we first switched off only the fibre amplifier and restarted it. However, the situation remained the same. We then switched off the fibre amplifier, closed the app, and restarted both the amplifier and the app. However, the situation still remained the same.

At this point, we noticed that before the "ENABLE" button on the controller app was pressed, i.e. when the laser was not emitting, the thermometer on the controller showed around 3.5°C, and when the "ENABLE" button was pressed, the thermometer value rose drastically to 52°C. When the "STOP" button was pressed, the value suddenly dropped from 52°C back to about 3.5°C. This seems to be odd behaviour of the thermometer.

At any rate, as it was late, we finished today's investigation.

Tomorrow, we will try to restart the whole laser system, including the seed laser.
 

Images attached to this report
DGS (General)
takahiro.yamamoto - 22:15 Tuesday 30 April 2024 (29361)
Taking backup of the core system of DGS
[Ikeda, Nakagaki, YamaT]

Abstract

We took backups of the system regions of the DAQ servers and k1boot.
The backup process for the data region of k1boot will be completed early tomorrow morning.
Don't make any changes in /opt/rtcds tonight.

After taking the backups, we will reboot all DAQ servers and real-time front-ends tomorrow.

Details

Backup of the DAQ servers
We took a system backup of the DAQ servers for the first time in a year. All DAQ servers (k1dc0, k1fw0, k1fw1, k1nds0, k1nds1, k1tw0, k1tw1, and k1bcst0) came back online without any trouble after the backup. All servers now run with new HDDs onto which all files were copied from the HDDs used until this morning. The old HDDs are kept in the HDD slot of each server as the latest backup, tagged as the backup of 2024.04.30. The backup from one generation ago, taken last spring (Feb.-Apr. 2023), is also kept in the server slot. Backups from two or more generations ago were brought back to Mozumi.

Backup of the system region of k1boot
We tried to take a backup of the system disk of k1boot, but the HDD couldn't be copied due to a disk error. This HDD was newly installed last November, so the disk error occurred within the last 6 months. The disk could no longer be used as the boot disk, but it could still be mounted by another system. So we mounted the broken HDD from another backup system taken last November and copied over all changes from the last 6 months (/etc/init.d/mx_stream and /diskless/root/etc/rtsystab). After that, we copied the current system files to two new HDDs. One of them is now being used as the current k1boot system disk and the other is kept as the latest backup, tagged 2024.04.30. Because the broken disk can still be mounted as a data region, it is also kept just in case.

Backup of the data region of k1boot
The backup process for the data region also failed, and it is difficult to salvage all the changes from the last 6 months (all model updates, filter updates, MEDM updates, etc.). So we are now copying files one by one with an rsync process, as sketched below. This process will probably complete early tomorrow morning (2am or 3am? It's difficult to estimate an accurate time because the copy speed is not stable). So please don't make any changes in the NFS region tonight; changes made tonight will vanish and we cannot ensure we can salvage them. The remaining work will be done tomorrow. After all backups are taken, the DAQ servers and all real-time front-ends will be rebooted.
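For illustration, a minimal sketch of driving such a file-by-file copy from Python; the mount points are placeholders and the actual rsync options used may differ.

import subprocess

SRC = '/mnt/k1boot_old/'  # placeholder: mount point of the old data region
DST = '/mnt/k1boot_new/'  # placeholder: mount point of the new data region

# -a preserves permissions/times, --partial resumes interrupted files,
# --ignore-errors keeps rsync going past I/O errors on a failing disk.
subprocess.run(['rsync', '-av', '--partial', '--ignore-errors', SRC, DST],
               check=False)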

CRY (Cryo-payload EY)
takafumi.ushiba - 18:41 Tuesday 30 April 2024 (29360)
Comment to ETMY photosensor recovery (28811)

[Dan, Hirata, rTakahashi, Tamaki, Ushiba]

We installed three additional cables between BF and PF (B49, B50, and B51).
Then, we replaced Dsub connectors on BF.
The following are the cable numbers before and after replacement:

Sensor name | Cable number before replacement | Cable number after replacement
H2 (CRYO10) | B44 | B49
H3 (CRYO11) | B45 | B50
V3 (CRYO12) | B46 | B51

The old cables were cut and clamped on the BF.
We will continue the cabling at the PF stage tomorrow.

Note:

During the work, we removed the radiation shield plates between the BF and PF.
They are stored in the cryostat and need to be reinstalled after the cabling is finished.

VAC (EYA)
takashi.uchiyama - 14:24 Tuesday 30 April 2024 (29359)
Comment to Pool of liquid in EYA chamber (29214)
2024/04/30

Uchiyama

I took some photos under the optical table in EYA with a 360-degree camera.

I found that almost all of the bottom surfaces of the chamber and the optical table are contaminated.
The support legs of the optical table are also contaminated.
Images attached to this comment
MIF (General)
takaaki.yokozawa - 12:04 Tuesday 30 April 2024 (29358)
Connect the beam duct between BS and SR3
I noticed that the GRY beam could not be injected into the Y arm. I checked the status of the beam ducts and found that the connection was closed off with aluminium.
I called Uchiyama-san and he agreed to connect them.
Images attached to this report
PEM (Center)
tatsuki.washimi - 11:07 Tuesday 30 April 2024 (29357)
IFI stack hammering

[Yokozawa, Washimi]

We moved the 3-axial accelerometer and the impact hammer from the OMC area to the IFI area.

MIF (ASC)
hirose.chiaki - 10:00 Tuesday 30 April 2024 (29355)
Checked the current values with the mini-rack's circuits turned ON.

[Tomura, Kamiizumi, Hirose]   This work was done on Friday, 26/04/2024.

Continued from klog29281.

We connected the power cables of the circuits in the mini-rack to the power strips located on the mini-rack.
These power strips are connected to the 18 V and 24 V power strips in the IOO0 rack. Then the following procedure was carried out to check the current values of the stable power supply in the computer room.

  1. First, we made sure that the stable power supply was turned on, then raised the current limit to its maximum. Specifically, we turned the "CURRENT" knob to the right to raise the limit current to the maximum. (The "CURRENT" knob changes the limit by 3 A per turn; for example, raising a 27 A limit to the 30 A maximum takes one turn. It is advisable to work out the required number of turns from the current limit value marked on the memo, turn the knob by that amount, and check that it is at its maximum and no longer turns.)
  2. To check that the circuits are connected to the stable power supply, turn on each circuit and check that the 'DC AMPERES' reading of the stable power supply rises.
    However, because the current drawn by the 24 V circuits is very low, the rise in the 'DC AMPERES' of the 24 V stable power supply could not be observed visually.
  3. Switch on everything, check the current values, and set the limit values. The current values of the ±24 V supplies remained almost unchanged, so their limit values were also left as before.

    Supply | Current value | Limit value
    +18V | 24A | 27A
    -18V | 14A | 18A

    The 18 V supply may exceed 25 A depending on circuit usage. If 25 A is exceeded permanently, the power supply should be re-examined. I would like to proceed as-is for now.

VIS (EX)
takafumi.ushiba - 9:55 Tuesday 30 April 2024 (29356)
Comment to ETMX PAY was tripped (29351)

I checked the reason why ETMX oscillated.
Figures 1 and 2 show the signals of the NB filters and the MN DAMP filters, respectively.

DOF5 and MN_DAMP_L oscillated at 5.1 Hz, a frequency that should be damped by DOF5.
So it is very likely that the cause of the oscillation is the DOF5 NB filter.
This filter was optimized when ETMX was at 90 K, so it is necessary to optimize it again at the current temperature.

Images attached to this comment
CAL (XPcal)
dan.chen - 7:29 Tuesday 30 April 2024 (29354)
Comment to Pcal-X alignment check (29352)

I requested the SAFE state and checked the SDF after this work and klog29353.
Also, I turned the Pcal-X laser OFF.

CAL (Pcal general)
dan.chen - 7:17 Tuesday 30 April 2024 (29353)
Comment to Pcal guardian update around (29335)

I added loop_check and injection_check as decorators in the CAL_PCAL guardian:

  • loop_check: makes sure that the OFS loops are open.
  • injection_check: makes sure that the injection SWs are OFF (open).

I added these decorators to the SAFE state and DOWN state of the guardian. A sketch of what such decorators can look like follows below.
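For reference, a minimal sketch of such Guardian decorators in Python; the channel names, threshold logic, and return targets are assumptions for illustration, not the actual CAL_PCAL code.

from functools import wraps

# `ezca` and `notify` are provided by the Guardian runtime environment.

def loop_check(func):
    """Proceed only if the OFS loop is open (hypothetical channel)."""
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if ezca['CAL-PCAL_X_OFS_SW'] != 0:   # assumption: 0 means loop open
            notify('OFS loop is not open')
            return 'DOWN'                    # redirect to a safe state
        return func(self, *args, **kwargs)
    return wrapper

def injection_check(func):
    """Proceed only if the injection switch is OFF (hypothetical channel)."""
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if ezca['CAL-PCAL_X_INJ_SW'] != 0:   # assumption: 0 means SW open
            notify('Injection switch is not open')
            return 'DOWN'
        return func(self, *args, **kwargs)
    return wrapper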

CAL (XPcal)
dan.chen - 5:47 Tuesday 30 April 2024 (29352)
Pcal-X alignment check

Date: 2024/4/30 early morning

I checked the Pcal-X beam positions on ETMX, and no large change was found compared with the last beam alignment work on 4/25 (klog29331).
Fig. 1: picture on 4/25; Fig. 2: picture on 4/30 (today).

Because of the ETMX suspension situation reported in klog29351, the suspension state was PAY_TRIPPED and I did not touch it, which means the Tcam picture can differ a little.

Has the Tcam direction changed a little?

Images attached to this report

VIS (EX)
takahiro.yamamoto - 19:20 Monday 29 April 2024 (29351)
ETMX PAY was tripped
ETMX PAY tripped at 17:57 on Apr. 27th (Sat.) JST (see Fig. 1).
It was recovered at 11:15 on Apr. 29th (Mon.) JST, but tripped again at 11:46 the same day.

In both cases, a software watchdog detected oscillation and saturation in the sensor signals of the MN stage (see also Figs. 2 and 3); a minimal sketch of this kind of check is given at the end of this report. This is now a matter of asking experts to check the control and/or sensors.

Guardian cannot escape from tripped states automatically, so someone must have recovered ETMX before it tripped again, but I couldn't find any klog posts about it. That may be a more serious problem than the trip itself...

By the way, the sound output of k1mon0 had been set to the on-board speaker rather than to the monitor connected via HDMI. Because of this, the sound notifications from the VIS guardian couldn't be heard at all. This was my mistake in the sound-output settings when the system was upgraded. The issue was fixed today.
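For illustration, a minimal sketch of the kind of saturation check such a software watchdog can perform; the threshold and test signal here are made up, not the actual KAGRA watchdog code.

import numpy as np

SATURATION = 30000  # assumed sensor-count threshold

def is_tripped(signal: np.ndarray) -> bool:
    """Trip when the MN-stage sensor signal exceeds the saturation level."""
    return bool(np.max(np.abs(signal)) > SATURATION)

# Example: a growing 5.1 Hz oscillation eventually trips the watchdog.
t = np.linspace(0.0, 60.0, 60 * 2048)
sig = 1000.0 * np.exp(t / 10.0) * np.sin(2 * np.pi * 5.1 * t)
print(is_tripped(sig))  # True: the envelope grows far beyond the threshold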
Images attached to this report
DGS (General)
shoichi.oshino - 15:57 Sunday 28 April 2024 (29350)
Preparation of DGS maintenance
I stopped the backup process of the opt directory on k1bck0, because it takes longer than one day.
After the maintenance is finished, I will restart this process.
VIS (PRM)
takafumi.ushiba - 16:33 Saturday 27 April 2024 (29348)
Comment to Health check of PRM (28522)

I checked all the TFs of PRM.
All TFs seem fine, though the resonant frequency of the GAS filter has shifted slightly.

The following is an additional note, which is not problematic:
1. The BF coil DoF measurements have a larger gain than before because of the calibration factor update (klog21311 and klog21315).

Images attached to this comment
VIS (PRM)
takafumi.ushiba - 16:26 Saturday 27 April 2024 (29349)
Check of PRM sensor spectra

I measured the spectra of the PRM LVDTs and OSEMs (fig1, fig2).
All seem fine.

Images attached to this report
IOO (IMC)
takafumi.ushiba - 14:37 Saturday 27 April 2024 (29347)
Reduce IMC-MCL_SERVO filter gain

IMC LSC often failed when holding the output of the MCL feedback.
Since the MCE actuator efficiency increased by a factor of 3 due to the change of the magnet size, I added a gain of 0.3 (roughly 1/3, to compensate) at FM9 (gain) of the IMC-MCL_SERVO filter bank.

MIF (General)
takafumi.ushiba - 14:34 Saturday 27 April 2024 (29346)
Finer alignment of XARM

I performed finer alignment of the X arm with the ADSs.

The procedure was as follows:

1. Lock the X arm with both IR and GR.
2. Engage the ADSs for PR3 using the GRX PD.
3. Engage the ADSs for PR2 and IMMT2 using the IRX PD.
4. Move ITMX and ETMX so that the beam spots on both mirrors are good.

The left panels of Figs. 1 and 2 show the beam spot on ITMX before the earthquake and now, respectively.
The left panels of Figs. 3 and 4 show the beam spot on ETMX before the earthquake and now, respectively.

After the alignment, I recorded the good OpLev values of IMMT2, PR2, PR3, ITMX, and ETMX.

The following are several points we need to check:
1. The IRX beam does not hit the X arm trans IR camera.
2. The GRX beam is shifted on the X arm trans GR camera.
3. We haven't checked the beam on TMSX, so I'm not so sure the ADSs work well (at least the trans power increased thanks to the ADSs, though).
4. The IRX and GRX normalized transmissions are around 0.7-0.8 now. I'm not sure whether this is due to bad alignment, bad finesse, or clipping somewhere (the GV between BS and IXC, the GV between EXC and TMSX, optics on TMSX, and so on).

Images attached to this report