Reports of 27268
DetChar (General)
nami.uchikata - 16:04 Tuesday 21 May 2024 (29592) Print this report
Modification of cache files at system B
I found that some LIGO-format cache files for the O4a period were incomplete. I have fixed them and also prepared combined O4a cache files.
Ordinary cache files: /home/detchar/cache/Cache_GPS
O4a combined:
LIGO format /home/detchar/cache/Cache_GPS/1368975618_1371337218_O4a.cache
Virgo format /home/detchar/cache/Cache_GPS/1368975618_1371337218_O4a.ffl
IOO (IMC)
kenta.tanaka - 13:40 Tuesday 21 May 2024 (29591) Print this report
IMC blasting resistance check with stomping

Yokozawa, Washimi, Tanaka

### Abstract

We stomped the ground around the MCF and MCE chambers while the IMC was locked, to check its resistance to the planned blasting. We could shake the ground around the MCF and MCE chambers by at most +/- 60-70 um/s, and the IMC kept lock throughout our stomping. According to the IMC LSC feedback signal to the laser PZT (K1:IMC-SERVO_SLOW_DAQ_OUT_DQ), when the ground shook at +/- 60-70 um/s, the feedback amplitude reached +/- 0.2 V at 1 Hz (the length resonance frequency of the Type-C suspensions). If the blasting shakes the ground with an amplitude of 200 um/s, as estimated by the construction company, the feedback signal is expected to reach +/- 0.6-0.8 V. This is well within the +/- 5 V range, so the IMC should be able to keep lock during the blasting. At worst, even if the IMC loses lock, the guardian can restore it automatically.

### What we did

  • Fig. 1 shows the locations of the seismometer and the stomping points.
  • With the IMC locked, we first stomped at the corners of the MCF ENGIRI in the following order (see the movie). We could shake the ground by at most +/- 60-70 um/s, about one third of the ground-motion amplitude estimated for the blasting, and the IMC kept lock during the stomping.
    1. -X, +Y (near the MCi camera; we disconnected the LAN cable during our stomping and reconnected it after the work)
    2. -X, -Y (near the MCi oplev)
    3. +X, -Y (near the MCo oplev)
    4. +X, +Y (near the MCF seismometer)
  • Then we moved to the MCE area and stomped the MCE ENGIRI corners with the same procedure as for MCF. Similarly, the IMC kept lock.
  • According to the IMC LSC feedback signal to the laser PZT (K1:IMC-SERVO_SLOW_DAQ_OUT_DQ), when the ground shook at +/- 60-70 um/s, the feedback amplitude reached +/- 0.2 V at 1 Hz, the length resonance frequency of the Type-C suspensions (Fig. 2).
  • If the blasting shakes the ground with an amplitude of 200 um/s, as estimated by the construction company, the feedback signal is expected to reach +/- 0.6-0.8 V. This is well within the +/- 5 V range, so the IMC should be able to keep lock during the blasting. At worst, even if the IMC loses lock, the guardian can restore it automatically.
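The extrapolation above can be checked in a few lines, assuming the coupling from ground velocity to PZT feedback is linear (the 65 um/s midpoint of the measured range is our choice for illustration):

```python
# Linear extrapolation of the IMC length feedback from the stomping test
# to the blasting amplitude. Assumes the coupling between ground velocity
# and PZT feedback is linear; 65 um/s is the midpoint of the measured range.
measured_velocity = 65e-6   # m/s (+/- 60-70 um/s stomping)
measured_feedback = 0.2     # V at 1 Hz on K1:IMC-SERVO_SLOW_DAQ_OUT_DQ
blast_velocity = 200e-6     # m/s, construction company's estimate

expected_feedback = measured_feedback * blast_velocity / measured_velocity
print(round(expected_feedback, 2), "V")  # ~0.62 V, within the +/- 5 V range
```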

 

Images attached to this report
VIS (EX)
takafumi.ushiba - 12:03 Tuesday 21 May 2024 (29589) Print this report
Comment to ETMX PAY was tripped (29351)

I redesigned the MN_NBDAMP_DOF5 filter so that it does not oscillate in the LOCK_ACQUISITION state.
At 296 K, the fourth L resonance of ETMX is about 5.04 Hz, while it was 5.13 Hz at 90 K.
So I made a new filter (BP5.04(296K)) at FM1 of DOF5 and moved the old filter from FM1 to FM2.
After changing the filter, the damping seems to be working well (fig1).

Figure 2 shows the one-hour trend of the PS DAMP signals in the LOCK_ACQUISITION state after changing the filter.
No oscillation is observed over the hour, so the control seems stable.
 

Images attached to this comment
VAC (General)
takaaki.yokozawa - 11:27 Tuesday 21 May 2024 (29590) Print this report
Open the GV between IMC and IFI chamber
I opened the GV between the IMC and IFI chambers.
Images attached to this report
PEM (Center)
tatsuki.washimi - 8:58 Tuesday 21 May 2024 (29588) Print this report
Comment to Shaker Injection Tests for the OMC Base Plate (29578)

Note that TFx and TFy represent the vertical -> horizontal coupling in this shaking test, but the horizontal -> horizontal coupling in the previous hammering test, so a direct comparison is not fair.

MIF (General)
dan.chen - 8:26 Tuesday 21 May 2024 (29587) Print this report
Comment to Finesse measurement X arm 240521 (29586)

HWP vs IMC trans power relationship

HWP [degree]   IMC trans [W]
7              1.45
8              1.72
9              2.0
10             2.3
11             2.6
12             2.9
13             3.25
14             3.6
15             3.9
16             4.25
17             4.65
18             5.0
19             5.3
20             5.7

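If one assumes the transmitted power follows a Malus-law sin^2 dependence on the HWP angle (a model choice on our part, not something measured in this entry), the table is reproduced well by a simple numpy-only grid-search fit; p_max and theta0 are fit parameters, not measured values:

```python
import numpy as np

# Data from the table above.
hwp = np.array([7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])  # deg
pwr = np.array([1.45, 1.72, 2.0, 2.3, 2.6, 2.9, 3.25,
                3.6, 3.9, 4.25, 4.65, 5.0, 5.3, 5.7])  # W

# Malus-law model for a HWP followed by a polarizer (assumption).
def model(theta_deg, p_max, theta0):
    return p_max * np.sin(np.deg2rad(2.0 * (theta_deg - theta0))) ** 2

# Coarse grid search for the best-fit parameters (numpy only, no scipy).
best = min(
    (float(np.sum((model(hwp, p, t) - pwr) ** 2)), p, t)
    for p in np.linspace(5.0, 20.0, 151)
    for t in np.linspace(-10.0, 5.0, 151)
)
rss, p_max, theta0 = best
```

With the tabulated values this lands near p_max ~ 10 W and theta0 ~ -4 deg with residuals at the few-percent level, but these numbers are illustrative only.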
 

MIF (General)
takaaki.yokozawa - 8:21 Tuesday 21 May 2024 (29586) Print this report
Finesse measurement X arm 240521
[Dan, Yokozawa]

We performed the finesse measurement on the X arm.

1. HWP vs IMC trans power relationship
We checked the relationship between PSL HWP and IMC trans power.
Fig. 1 shows the result.
Details will be reported by Dan-san.

2. Initial alignment by hand
When we performed the IR lock, the transmitted power at the TMSX IR PD was about 0.5,
and the GRX flash was very weak.
Fig. 2 shows the status of GRX.
After performing hand alignment of PR3, ITMX, and ETMX, the IR trans power became 0.8.

3. Finesse measurement
When I started the finesse measurement script, an IR lock loss always occurred when the PSL HWP changed from 15 to 16, so I changed the target HWP value from 20 to 15.

DATE : 2024/05/20 22:59 UTC
ARM : X
TEMP ITM : 296.8
TEMP ETM : 296.3
NUM : 10
IFO REFL HWP : 146.0
PSL HWP : 15.0
IMC POWER : 3.9
VALUE : 1475.3
ERROR : 16.5

Measured data is stored in /users/Commissioning/data/Finesse/Xarm/20240520-225900
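For reference, the measured finesse converts to a cavity bandwidth, assuming the nominal 3 km arm length (the length is not stated in this entry):

```python
# Cavity numbers implied by the measured finesse VALUE above.
# The 3 km arm length is an assumption (nominal KAGRA arm length).
c = 299_792_458.0       # m/s
arm_length = 3000.0     # m (assumption)
finesse = 1475.3        # measured VALUE

fsr = c / (2.0 * arm_length)    # free spectral range, ~50 kHz
hwhm = fsr / (2.0 * finesse)    # cavity pole (half bandwidth), ~17 Hz
print(f"FSR = {fsr/1e3:.1f} kHz, cavity pole = {hwhm:.1f} Hz")
```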

Images attached to this report
Comments to this report:
dan.chen - 8:26 Tuesday 21 May 2024 (29587) Print this report


CAL (XPcal)
dan.chen - 6:18 Tuesday 21 May 2024 (29585) Print this report
Comment to How to take Tcam picture for Pcal-X beam position monitor (29459)

I took the picture on 21st May (Tuesday).

VAC (EYA)
shinji.miyoki - 23:18 Monday 20 May 2024 (29583) Print this report
EYA cleaning with cleaning water Day1

[miyoki, uchiyama, hayakawa, yoshimura, yamaguchi, omae, takahasi, sawada]

Progress

We filled the cleaning water to just above the bottom surface of the optical table and left it for one night.

Cleaning Process

  1. Water preparation
    • We prepared water from the tap at the EY area, filtered through cascaded 25 um and 5 um filters.
    • We also prepared hot water from this filtered water using an electric pot.
    • We put the filtered water into poly-tanks, mixed it with the hot water to make warm water, and then moved the poly-tanks to the EYA area.
    • We made the "cleaning water" by diluting the neutral detergent with 50 times its amount of warm water.
  2. Absorber setting
    • Because I had already set two of the three thin red tubes to absorb the water between the bellows and the poles of the optical table, I set the last one in the space on the +x/+y side.
  3. Bubble shield setting, etc (Fig.1, 2, 3)
    • We inserted sponge bars wrapped with stretch seals into the space between the optical table and the side wall of the EYA vacuum tank, to block bubbles from the cleaning water when we injected air to mix it.
    • We put Vectra Alpha cloths on the mirrors for the Pcal. We are very sorry, but one of the mirrors seems to have got a liquid drip on its left surface. We cleaned the drip with acetone, and the surface appeared clean afterwards. In any case, please check and clean all mirrors after the EYA tank cleaning.
  4. Cleaning works
    • The cleaning water was transferred from the poly-tanks to the bottom of the EYA vacuum tank by a pump.
    • During the transfer of the cleaning water, we also injected air with an air pump to mix the cleaning water in the EYA tank. (Fig.4)
      • I tested the water absorber for the bellows-pole space and confirmed that water could be absorbed.
    • We stopped the cleaning water injection when the water surface was above the bottom surface of the optical table.
    • I washed the bottom surface of the optical table with a long-handled silicone brush. However, the outermost area of the optical table could not be reached. I will try again tomorrow with a longer-handled wiper. (I have already bought a longer brush at DAISO.)
    • Because time was up, we left the cleaning water inside the EYA tank.
    • To avoid too much moisture inside the EYA vacuum tank, we left two side hatches open, while the side mouth on the Y-arm axis on the EYC side was closed, so the air from the FFUs is expected to flow from the -x side to the +x side.
    • (When the water surface approached the bottom surface of the optical table, bubbles came out through several holes that completely penetrate the optical bench, so we stopped the bubbling. As expected, the sponge bars prevented these bubbles from reaching the optical table surface.)

Tomorrow plan

  1. Wash the bottom surface of the optical bench again using a longer brush.
  2. Drain the water from the EYA tank. (The used water must be transferred back into poly-tanks because we cannot discharge the used cleaning water into the mine water routes.)
  3. Inject the warm water again and wash the bottom surface of the optical table once more.
  4. Drain the water from the EYA tank and check inside with the 3D camera.
  5. ...

 

Images attached to this report
VIS (SRM)
ryutaro.takahashi - 22:29 Monday 20 May 2024 (29584) Print this report
Comment to Health check of SRM (28373)

I checked the TFs measured in air. They are consistent with the reference (measured before O4a) and look healthy.

Images attached to this comment
VIS (General)
takahiro.yamamoto - 18:07 Monday 20 May 2024 (29582) Print this report
Saturation check for GAS controls

Abstract

Because end users often continue their work without checking for DAC saturation, a check function for saturation of the GAS controls was implemented.
This function will be deployed to all VIS guardians on the next maintenance day.

Details

Some works, such as health checks, height checks by TCam, and Pcal alignment, were often performed without checking for DAC saturation, especially on the GAS controls. To prevent such situations, I added a new check function for saturation of the GAS controls.

This function checks whether COILOUTF_GAS_OUT is smaller than 25000 ct. If the readout value is larger than the threshold on more than one stage, it issues notifications on MEDM and Slack. I plan to apply this function to all idle states in which the DC control of the tower part is engaged. Following Ushiba-kun's advice, it is better to enable this check only in vacuum, so enabling/disabling the function is managed in params.py. (If the stability of the vacuum readout becomes more reliable, it may be better to manage this using the vacuum readout value in each area, but it is not reliable enough yet, so I did not implement that.)

To deploy this function, all VIS guardians must be reloaded (or the guardian nodes may have to be restarted?), so I will deploy it on the next maintenance day.
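A minimal sketch of the check described above; the channel-name pattern, the `ezca`-style access, and the stage names are illustrative assumptions, not the actual guardian code:

```python
# Sketch of a GAS saturation check (names and interface are assumptions).
GAS_SATURATION_THRESHOLD = 25000  # ct, compared against COILOUTF_GAS_OUT

def saturated_gas_stages(ezca, optic, stages, enabled=True):
    """Return the GAS stages whose coil output magnitude exceeds the threshold.

    `enabled` mirrors the in-vacuum flag managed in params.py; when False,
    the check is skipped entirely.
    """
    if not enabled:
        return []
    return [
        stage for stage in stages
        if abs(ezca[f'VIS-{optic}_{stage}_COILOUTF_GAS_OUT'])
           > GAS_SATURATION_THRESHOLD
    ]

# A non-empty result would trigger the MEDM/Slack notification.
```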
VIS (IX)
lucia.trozzo - 16:58 Monday 20 May 2024 (29581) Print this report
Comment to Ineretial damping implementation: preliminary results (29481)

Over the last few days I have been trying to understand the reason for the instability of the 30 mHz blending strategy along the T direction. As already mentioned, the T TF still shows the phase lag visible in the TFs measured with the blended sensor, despite the phase compensator implemented to compensate for it. As with the EX, I first modified the phase compensator to avoid DC saturation of the ACC and GEO (see Figure 1, Figure 2, Figure 3, and Figure 4), and then measured the TFs: LVDT/IS{L,T}.

Figure 5 and Figure 6 show the TFs LVDT/IS{L,T}. It is clear that there is still a phase lag, which introduces instability into the loop. To reduce the phase lag, I implemented a new phase compensator on the virtual inertial sensor. I then re-measured the TFs, and the new compensator seems to reduce the phase lag and should stabilise the loop (see Figure 7, Figure 8).
Next step:
Test the stability of the loop and redesign the blend filters.

 

 

Images attached to this comment
PEM (Center)
tatsuki.washimi - 15:15 Monday 20 May 2024 (29580) Print this report
Comment to Shaker Injection Tests for the OMC Base Plate (29578)

I calculated the transfer functions from the base vibration (z) to the table vibration (x, y, z).

Compared with the hammering results (klog29515), an inconsistency is found.

The underestimation below 70 Hz is resolved.
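A transfer function like this is typically estimated as the ratio of the averaged cross-spectrum (drive to response) to the drive power spectrum; a numpy-only sketch, with windowing and segmenting simplified:

```python
import numpy as np

def estimate_tf(x, y, fs, nseg=1024):
    """Welch-style transfer function estimate H(f) = <Y X*> / <X X*>.

    x: drive (e.g. base z acceleration), y: response (e.g. table x/y/z).
    Returns (frequencies, complex TF).
    """
    win = np.hanning(nseg)
    sxx = np.zeros(nseg // 2 + 1)                 # averaged drive PSD
    sxy = np.zeros(nseg // 2 + 1, dtype=complex)  # averaged cross-spectrum
    for k in range(len(x) // nseg):
        seg = slice(k * nseg, (k + 1) * nseg)
        xf = np.fft.rfft(win * x[seg])
        yf = np.fft.rfft(win * y[seg])
        sxx += (xf * np.conj(xf)).real
        sxy += yf * np.conj(xf)
    return np.fft.rfftfreq(nseg, 1.0 / fs), sxy / sxx
```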

Images attached to this comment
PEM (Center)
tatsuki.washimi - 13:53 Monday 20 May 2024 (29578) Print this report
Shaker Injection Tests for the OMC Base Plate

I performed shaker injection tests on the OMC base plate, locating a three-axis accelerometer (TEAC710Z) on the optical table and a single-axis accelerometer (TEAC710, vertical) on the base plate.

Images attached to this report
Comments to this report:
tatsuki.washimi - 15:15 Monday 20 May 2024 (29580) Print this report

tatsuki.washimi - 8:58 Tuesday 21 May 2024 (29588) Print this report


VIS (EY)
takafumi.ushiba - 13:08 Monday 20 May 2024 (29577) Print this report
Comment to Health check for ETMY (28430)

I performed actuator and center balancing of ETMY to confirm that the strange MNV TF can be improved by sensor/actuator decoupling.
Figure 1 shows the MNV TF after decoupling.
The MNV TF has become healthy.

Also, I measured the TFs from the V1 and V3 coils (fig6: V1, fig7: V3).
Since the gain of V1 became smaller, the gain of the TF is smaller than before, but this is not problematic.
Also, the gain of MN V3 is now the same as the reference, so the smaller gain measured previously seems to be due to the gain change in DGS.

So, the MN stage TF seems fine now.

What I did:
1. The photosensor gains (MN_OSEMINF_{H1,H3}_GAIN) were changed to minimize the coupling between MNV and MNP (fig2: before, fig3: current).
2. Actuator balancing was performed to reduce the V2P coupling (fig4: before, fig5: current).
3. V2Y actuator decoupling was performed (fig6).

Images attached to this comment
VAC (PRM)
tomotada.akutsu - 12:40 Monday 20 May 2024 (29579) Print this report
Comment to Replacement of pressure gauge on PRM vacuum pumping unit (29522)

Is this the actual pressure inside the IFI-IMM-PRM chambers? I wonder whether the GV between this CC-10 and the IFI-IMM-PRM chambers is open or not.

VAC (PRM)
nobuhiro.kimura - 11:17 Monday 20 May 2024 (29576) Print this report
Comment to Replacement of pressure gauge on PRM vacuum pumping unit (29522)

[Kimura]
The serial communication settings on the replaced CC-10 were reset to factory settings, but the connection to the network was not restored.
Therefore, the electronic board of the CC-10 was replaced with the board from the removed CC-10, and the sensor calibration curve was reset.
As a result, the connection to the network was restored. (Figure 1)
The values were confirmed to be consistent with the displayed values seen by the network camera. (Photo 1)
The CC-10 with the communication failure will be sent for repair.

At 11:16 a.m., CC-10 indicated 8.0 x 10^-5 Pa.

Images attached to this comment
VAC (PRM)
shinji.miyoki - 9:16 Monday 20 May 2024 (29574) Print this report
Comment to Replacement of pressure gauge on PRM vacuum pumping unit (29522)

Around 9:00, 8.2x10^-5 Pa.

CAL (XPcal)
takaaki.yokozawa - 9:42 Sunday 19 May 2024 (29573) Print this report
Comment to How to take Tcam picture for Pcal-X beam position monitor (29459)
I took the picture 19th May. (Sunday)
VAC (PRM)
shinji.miyoki - 9:59 Saturday 18 May 2024 (29572) Print this report
Comment to Replacement of pressure gauge on PRM vacuum pumping unit (29522)

Around 10:00, 1.0x10^-4 Pa.

DGS (General)
takahiro.yamamoto - 2:09 Saturday 18 May 2024 (29571) Print this report
balancing DAQ data rate of two NICs on k1dc0

Abstract

After we installed two NICs on k1dc0 for the DAQ stream (see also klog#29110), the IPC glitch rate decreased to about once per 1-2 days.
Because all remaining glitches occurred on front-end computers connected to the primary NIC, and the amount of data on the primary NIC was much larger than on the secondary NIC, I balanced the data rate between the two NICs.
At the current glitch rate, we will probably need a couple of weeks to conclude whether the situation has improved.

Details

We had installed a secondary NIC on k1dc0 for the DAQ stream to spread the data load in the work of klog#29110. At that time we did not modify the launch script of mx_stream, and there was a bias in the amount of data on the two NICs: fifteen of the 25 front-end computers were connected to the primary NIC with a data volume of 28.1 MB/s, while the remaining 10 were connected to the secondary NIC with 13.6 MB/s.

After this update, the glitch rate decreased from a few to a few tens per day down to once per 1-2 days, so the dual-NIC configuration seems to have some effect in reducing IPC glitches.

The remaining glitches occurred only on front-end computers connected to the primary NIC. As mentioned above, both the data rate and the number of front-end computers on the primary NIC were larger than on the secondary NIC, so I guessed that the data rate and/or the number of front-ends is related to the glitches and balanced them between the two NICs.

Since the assignment of front-end computers to each NIC is done in /diskless/root/etc/init.d/mx_stream, I changed the way the card number and endpoint number are determined in this script (the original code is commented out). Now 13 and 12 front-end computers are connected to the primary and secondary NIC, respectively. I also modified the order of front-ends in /diskless/root/etc/rtsystab to balance the total data rate of each NIC (the old file is kept as rtsystab.20240517). The data rate is now balanced at 21.3 MB/s on the primary and 20.4 MB/s on the secondary. The attachments list the front-end name, data rate, serial number, endpoint number, and card number before and after this work.

Because I am not sure whether the cause of the remaining glitches is really the data rate, I do not know whether the situation will improve. Considering the current glitch rate, it will take a couple of weeks to draw any conclusion about the effect of this work.
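The rebalancing amounts to a small partitioning problem. A greedy sketch (the front-end names and rates below are made up; the real assignment lives in mx_stream and rtsystab):

```python
# Greedy split of front-end data rates over two NICs (illustrative only).
def balance_nics(rates):
    """Assign each front-end to the NIC with the smaller running total.

    rates: dict of front-end name -> data rate in MB/s.
    Returns (assignment dict, per-NIC totals).
    """
    assignment = {'primary': [], 'secondary': []}
    totals = {'primary': 0.0, 'secondary': 0.0}
    # Placing the largest rates first gives the greedy heuristic its best chance.
    for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        nic = min(totals, key=totals.get)
        assignment[nic].append(name)
        totals[nic] += rate
    return assignment, totals
```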
Non-image files attached to this report
DGS (General)
takahiro.yamamoto - 0:42 Saturday 18 May 2024 (29558) Print this report
Installation of a new ADC card for K1IOO1 (not yet completed)
Although I tried to add a new ADC board to K1IOO1 for the f3 WFS, the work could not be completed, as reported in klog#29552.
After the cables around the power breaker boxes are cleaned up, I will try again.

-----
As preparation for next time, I installed a new AA chassis (S1307462) at U15 of the IOO1 rack. The power cable is already connected, but the new AA chassis is not turned on yet (to check that the chassis is not broken, I turned it on once and then turned it off after today's work). The SCSI cable is not connected because the ADC board is not installed yet. The ADC board, internal cable, adapter board, and SCSI cable are stored in the server room in the mine.
VAC (EYA)
takashi.uchiyama - 21:40 Friday 17 May 2024 (29570) Print this report
Comment to Cleaning of EYA (29489)
2024/05/17

mTakahashi, Uchiyama

We wiped in EYA.

- Wiped the body and the bottom of the chamber with alkaline ionized water and finished with pure water.
- Wiped the bottom of the optical table with a neutral detergent and finished with pure water.
I put my hand through the center hole of the optical table and wiped from there. I couldn't wipe the outside of the optical table or the arm side because I couldn't reach them.

I removed the PD-A before the cleaning.
MIF (General)
satoru.takano - 20:53 Friday 17 May 2024 (29569) Print this report
Comment to Investigation of the oscillation from the common mode servo (29503)

I forgot to mention it.
During this work we found air-wiring of RF components around the IOO0 rack, as attached. Such a connection should be avoided.

Images attached to this comment
MIF (General)
satoru.takano - 19:57 Friday 17 May 2024 (29568) Print this report
Common Mode Servo Inspection: ALS1 rack

Aoumi, Kamiizumi, Tomura (mine), Takano (remote)

Abstract

We investigated the oscillation of the common mode servos installed in the ALS1 rack one by one. The servos oscillated around 28 MHz, as we expected. We also found an extremely large 50 MHz peak from the Summing node, which seems to come from the GrPDH X/Y servos.
We also confirmed that the GND of the input signal of the FIB X/Y servos is well isolated from the GND of the servos themselves (above 1 MΩ), which is why these two servos do not oscillate.

Detail

From the previous measurement we confirmed that the common mode servos installed in ALS1 rack (PLL X/Y, CARM, Summing node) oscillate at some MHz. To identify which servo oscillates at which frequency, today we investigated the situation of the oscillation one by one.

Figure 1 shows the spectrum with all the servos in the rack turned on, measured in the same way as the previous measurement. We saw oscillation peaks at 17 MHz and 28 MHz. After turning off all these servos, we saw the spectrum shown in Figure 2.

After that, we turned on each servo one by one. Figure 3 and Figure 4 show the spectra with PLL X and PLL Y turned on, respectively. For PLL X a large peak exists at 28 MHz, which is likely to come from the oscillation of the servo. On the other hand, for PLL Y we couldn't see any peak around there. We then checked the length of the BNC cable between the PFD and the PLL X/Y servos and found that it differs between X and Y: 1 m for PLL X and ~2.5 m? for PLL Y (it was hard to confirm the actual length, but it is at least longer than 1 m). Therefore, it seems that for PLL Y the BNC cable is long enough not to oscillate.

Next, we checked the signals from the CARM servo and the Summing node. Figure 5 shows the spectrum from CARM, and Figure 6 from the Summing node. For CARM we found a large peak around 28 MHz, which seems to come from the oscillation due to the connection between the Qmon of an I/Q demodulator and the servo. For the Summing node, no peak appeared around 28 MHz, but an extremely large peak existed at 50 MHz. It was also confirmed that this peak remains even if we turn off the power of the Summing node. Therefore, we suspected that this peak comes from the Gr PDH servos.

We moved on to the Gr PDH X/Y servos. We measured the signal around the SLOW OUT of each servo, which goes to the inputs of the Summing node. The measured spectra are shown in Figure 7 for PDH X and Figure 8 for PDH Y. There is clearly a peak at 50 MHz, which implies that the peak in the Summing node comes from Gr PDH X or Gr PDH Y. But we are not sure of the source of the peak.

Finally, we checked the GND level of the inputs of the FIB X/Y servos, which did not oscillate in the previous measurement. We measured the resistance between the GND of the input BNC cable and the GND of OUT2. The results are shown in the table below:

Servo   IN1      IN2
FIB X   1.6 MΩ   1.6 MΩ
FIB Y   2 MΩ     1.6 MΩ

From the results it is clear that the GND level of the inputs of these servos is well isolated from that of the servos, which is why these two servos do not oscillate.

Conclusion

We now mostly understand the condition for the oscillation: a common mode servo oscillates if one of its inputs is close to the GND level of the servo (≈ a single-ended signal from a circuit with a common GND level) and the cable is short enough (< 1 or 2 m?).

On the other hand, we found a large peak at 50 MHz in Gr PDH X/Y and the Summing node. We do not know its source and should investigate it in the future.
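One sanity check on the "< 1 or 2 m" cable-length condition is to compare it with the quarter wavelength in coax at the observed oscillation frequencies (the velocity factor 0.66 for RG-58-type cable is an assumption on our part):

```python
# Quarter wavelength in coaxial cable at the observed oscillation frequencies.
c = 299_792_458.0   # m/s, speed of light in vacuum
vf = 0.66           # velocity factor of RG-58-type coax (assumption)

for f_mhz in (28.0, 50.0):
    quarter_wave = vf * c / (f_mhz * 1e6) / 4.0
    print(f"{f_mhz:.0f} MHz: lambda/4 = {quarter_wave:.2f} m")
```

The ~1.8 m quarter wave at 28 MHz is at least consistent with a ~1 m cable oscillating while a ~2.5 m one does not, though this is only a plausibility argument, not a confirmed mechanism.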

 

Images attached to this report