[Ikeda, YamaT]
Recently, we have been replacing the IO chassis from V1 to V2.
(K1TEST0 in klog#33560, K1IX1 in klog#36572, K1IY1 in klog#36625, K1EX0 in klog#36654, and K1EY0 in klog#36692).
Real-time models on these front-ends seemed to work well.
However, they had a problem: the IRIG-B timing took a very long time to stabilize after a cold boot of the front-end computer.
Last weekend, we realized that this might be caused by a mix-up of different fiber-optic cables.
So we replaced the fiber-optic cables of K1TEST0, K1IX1, K1IY1, and K1EX0 with the correct combination.
After that, the issue of the timing synchronization taking too long no longer occurred.
We did not have enough time to go to the Y-end today, so we will do the same for K1EY0 tomorrow.
The V2 IO chassis uses a new cable connection standard, and some of its specifications differ from those of the V1 IO chassis. The most significant difference is that the data transfer cable between the front-end computers and the IO chassis has been changed from custom-made HIB optical fiber to standard MTP fiber (OM3). Thanks to this change, we are no longer constrained by the limited availability of HIB cables, and we can provide additional IO chassis as spares and/or for new instruments in the future, e.g., the filter cavity.
In addition to this change, the timing fiber cable inside the IO chassis was also changed to OM3. Strictly speaking, on the V1 IO chassis the SFP port of the timing slave was directly accessible from the front panel and any SFP module could be attached, so there were no internal cables; it is therefore more accurate to say that the interface was changed rather than the cable standard.
Until now, OM1 cables have been installed for multimode optical fiber, not just for timing signals. (Single-mode fiber, OS1, is used for long-distance connections, so that is slightly off-topic here.) As a result, most of the timing slaves in the V1 IO chassis were connected to the timing fanout with OM1 cables, and we reused those cables for the V2 IO chassis when we replaced the chassis.
Although the real-time models themselves were running well in this configuration, it remained a mystery why it took several hours for the IRIG-B timing to stabilize after a cold boot of the front-end computer. Figures 1 and 2 show the fluctuations in the IRIG-B value following a cold boot of EY0 and IX1, respectively. Both show that it takes O(hours) for the IRIG-B value to stabilize within the normal range (5-20 us). For EY0 (Fig. 1), this behavior was reproduced in three trials.
The IRIG-B value is not the primary source of timing synchronization but merely serves as a cross-check. However, if it falls outside the normal range, RFM and Dolphin communication fail, so global control cannot be performed. Even though it took a long time for the IRIG-B value to stabilize, it eventually returned to the normal range, so we were able to resume global control simply by waiting. Still, an O(hours) wait is a serious amount of effective downtime of the digital system from the viewpoint of preserving commissioning and observing time.
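In effect we were doing by hand what a simple polling loop would do. Here is a hedged sketch of that waiting logic; the reader callback and poll cadence are hypothetical (not the actual EPICS interface), while the 5-20 us window is the nominal range quoted above:

```python
def wait_for_irigb(read_offset_us, lo=5.0, hi=20.0, consecutive=5):
    """Poll the IRIG-B offset until `consecutive` samples in a row fall
    inside the nominal [lo, hi] us window; return the number of polls."""
    streak = polls = 0
    while streak < consecutive:
        polls += 1
        streak = streak + 1 if lo <= read_offset_us() <= hi else 0
    return polls

# Demo with synthetic data: the offset drifts from far outside the window
# down into range, roughly as in Figs. 1 and 2.
vals = iter([200, 150, 80, 40, 18, 14, 12, 11, 10])
print(wait_for_irigb(lambda: next(vals)))  # -> 9 polls
```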
At first, we did not realize the relation between the IRIG-B issue and the fiber-cable standards, but we finally noticed that OM1 and OM3 cables have different core diameters (62.5 um and 50 um, respectively) and that mixing them can cause speed reductions and instability, especially over long-distance connections in the Ethernet case. To be honest, we were not sure whether the same argument applied to timing synchronization, but we decided to try unifying the fiber cables to OM3.
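As a rough sanity check of why the mismatch matters (our arithmetic, not a measurement on these fibers): when light launched from the larger 62.5 um OM1 core enters a 50 um OM3 core, the worst-case geometric loss follows from the core-area ratio:

```python
import math

def core_mismatch_loss_db(tx_core_um: float, rx_core_um: float) -> float:
    """Worst-case geometric loss (dB) when light from a larger multimode
    core is launched into a smaller one; ~zero in the other direction."""
    if tx_core_um <= rx_core_um:
        return 0.0
    # Coupled power scales with core area, i.e. with diameter squared.
    return 20.0 * math.log10(tx_core_um / rx_core_um)

# OM1 (62.5 um) into OM3 (50 um): roughly 1.9 dB per mismatched joint.
print(round(core_mismatch_loss_db(62.5, 50.0), 2))  # -> 1.94
```

A couple of dB per joint is enough to push a marginal receiver out of its operating range, which is consistent with the instability we observed.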
After replacing the fiber cables, we no longer needed to wait for the IRIG-B timing to stabilize, and the issue did not reappear over a few power cycles. We therefore concluded that the IRIG-B issue was induced by the mismatch of the optical-fiber standards. Today, we were able to complete the replacement work for K1TEST0, K1IX1, K1IY1, and K1EX0. The remaining K1EY0 will be processed tomorrow.
[Washimi, Takahashi]
We recovered the flags touching the OSEM bodies in the IM. We rotated the IM with the yaw picomotor, aligning the TM with the F0 yaw FR. The H3 flag was released easily (Pic. 1 and 2), but then the H1 (Pic. 3) and V2 (Pic. 4) flags touched the OSEM bodies. Although we tried to find a good relative geometry between the IM and the IRM, we could not find one. We changed strategy: we adjusted the positions of the OSEM bodies by moving the base panel of the OSEMs. The axial positions of the H1, H3, and V1 OSEMs were also adjusted. We rotated the H2, H3, and V3 flags to be perpendicular to the LED-PD line in the OSEM.
We have updated the k1vissrmt model file and removed unnecessary DAQ channels.
Requests from Ushiba-san and R.Takahashi-san
Related to K-Log#36664
[K1SRM]
model: k1vissrmt
detail: Change the input ADC channels for ACC_H1, 2, and 3 from 20, 21, and 22 to 21, 22, and 23
Request from YamaT-san
We have commented out K1EDCU_TESTIOC.ini from the DAQ master file.
[Nakagaki, Tanaka, Ushiba, Hirose]
We plan to set up a new sub-laser and optics on the POS table in order to perform absolute measurements of PRCL and SRCL. Therefore, it is necessary to draw up a layout for the POS table such as JGW-T1909623. Today's work confirmed the current configuration of the optics.
We have asked Nakagaki-san to draw up detailed diagrams of the optics. This will be posted in JGWdoc or Klog.
With Hayakawa-san, SawadaH-san, TakahashiM-san, Yamaguchi-san, Ohmae-san
The Ncal pylons were moved from the X-end parking area back to their original location under the beam pipe inside the X-end experimental room.
Before the relocation, the following preparations were performed:
Pictures: link
I measured the spectra of the FLDACCs. They look fine. The displacement noise was less than 0.1 um/rtHz at 0.1 Hz.
I checked the IP transfer functions after removing the 1 kg ballast ring. The resonant frequencies of the IP changed from 51 to 55 mHz for L and from 59 to 63 mHz for T. They are still lower than the reference values (63 mHz for L, 70 mHz for T).
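The direction of the shift is what a simple inverted-pendulum model predicts: near the point where the gravitational anti-spring cancels the flexure stiffness, even a small reduction of the supported mass raises the resonant frequency noticeably. The sketch below uses entirely hypothetical parameters (leg length, total load) just to illustrate the mechanism; it is not a model of this particular IP:

```python
import math

G = 9.81  # m/s^2

def ip_freq_hz(k_eff: float, mass_kg: float, leg_m: float) -> float:
    """Resonant frequency of an idealized inverted pendulum: the flexure
    restoring term k_eff/m competes with the gravitational anti-spring g/l."""
    omega_sq = k_eff / mass_kg - G / leg_m
    if omega_sq <= 0:
        raise ValueError("unstable: anti-spring exceeds flexure stiffness")
    return math.sqrt(omega_sq) / (2 * math.pi)

# Hypothetical numbers: 400 kg load on 0.5 m legs, with k_eff tuned so the
# frequency sits at 51 mHz, i.e. close to cancellation.
mass, leg = 400.0, 0.5
k_eff = mass * (G / leg + (2 * math.pi * 0.051) ** 2)

f_before = ip_freq_hz(k_eff, mass, leg)
f_after = ip_freq_hz(k_eff, mass - 1.0, leg)  # remove the 1 kg ballast
print(f"{f_before*1e3:.0f} mHz -> {f_after*1e3:.0f} mHz")
```

With these made-up parameters a 1 kg change moves the resonance by several mHz, the same order as the measured 51 to 55 mHz shift.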
[Nakagaki, Ikeda, YamaT]
We were able to solve the missing PCIe card issue by replacing the second DAC (DAC#1).
After that, the ADC and DAC noise was measured for all channels, and k1ey0 finally came back online.
Note that the PEM speaker output cable is currently unplugged from the AI chassis, because timing synchronization cannot be recovered while the speaker is plugged in.
Please connect it when it needs to be used.
According to our experience at MCF, once timing synchronization has been established, the speaker can be connected.
However, it sometimes causes an OS hang-up, so it is better to unplug it again after use.
-----
Solving the missing PCIe card issue
Today, we first tried to reproduce yesterday's situation by removing the BO card and then swapping DAC#0 and DAC#1, but the OS could not find DAC#1. We therefore concluded that removing the BO card is not a reliable way to operate two DACs. Next, we replaced DAC#1 from S1809372 to S2516731. After that, both DACs could be operated even with the BO card installed, and the behavior was verified as reproducible over several power cycles. We have not identified why S1809372 did not work properly, but we currently suspect two possibilities and will check them on the test bench.
1. The DAC was simply damaged by the speaker issue.
2. The LIGO firmware was not applied to that DAC. (It was not purchased by DGS, so we do not know its details.)
Noise measurement
After all PCIe cards were operating properly, the noise level was measured for all ADC/DAC channels, as shown in the attached figures. The measurement configurations were exactly the same as those for K1IX1 (klog#36586), K1IY1 (klog#36641), and K1EX0 (klog#36663). The measurement files are stored in /users/DGS/measurements/ADC/K1EY0/2026/0403_V2_IO_CHASSIS/.
RFM board replacement
While cleaning everything up at the end of our work, we noticed that the RFM connection showed an error bit. This was caused by the use of the wrong type of SFP module (single-mode vs. multi-mode). On the 1st floor of the end stations, the RFM fibers are laid with single-mode fiber. On the other hand, the new V4 front-end computer, which had been used as K1IY0 in the past, had a multi-mode RFM board. So we moved the single-mode RFM board from the old V3 front-end computer to the new V4 front-end computer; the RFM connection came back online and the GDS_TP screen showed all green. As a result, the remaining spare RFM board is a multi-mode one.
Remaining concern
The only remaining concern is that it takes a few hours for IRIG-B to synchronize after a cold boot of the front-end computer. In the past, similar behavior often occurred when the downtime was long and the room temperature changed significantly, perhaps because the temperature of the crystal oscillator in the timing slave took a long time to stabilize. This time, however, it takes a long time even when the downtime is only a few seconds. This behavior was not reproduced on the test bench, so we have no idea how to solve this issue yet.
Fortunately, because this issue does not occur when the front-end computer is simply rebooted (there seems to be some difference between a reboot and a cold boot), we do not face it very frequently. However, reproducing and solving it on the test bench is required from the viewpoint of reducing downtime. If it is caused by the environment in the mine, we may need an additional test in the mine instead of on the test bench.
[Washimi, Takahashi]
We repaired the broken Lemo cable for the H3 FLDACC. Although the cable was reconnected to the H3 FLDACC, there was still no signal. We also checked the in-vacuum PTFE cable. The connections of some pins (#1, #6, #7) were unstable or had failed. We repaired the PTFE cable, and we could then confirm the signal and the actuation in the H3 FLDACC. After that, we adjusted the tilt of all the FLDACCs so that the LVDT signals fluctuate around zero.
Additionally, we fixed the PTFE cable for the H2 FLDACC with a PEEK tie (Pic.) and removed the 1 kg ballast ring from the IP.
By the way, we have a jig, made at the beginning of 2024, that should be used to gently rotate the flag so that the tip direction aligns with the vertical direction.
[Ushiba, Washimi, Takahashi]
We found that the H1 (Pic. 1) and H3 (Pic. 3) OSEM flags on the IM are touching the OSEM bodies on the IRM. The H2 (Pic. 2) OSEM flag was also close to the OSEM body.
[Ushiba, Washimi, Takahashi]
We went to investigate the H1 and H3 FLDACCs. We opened the actuator housing and checked the movable range with the LVDT: from +80 (actuator side) to +1700 (LVDT side) in H1, and from -2500 to +750 in H3. When we removed the actuator yoke (Pic. 1), the range on the actuator side extended to -600 in H1 (Pic. 2); the actuator coil (Pic. 3) may have been touching the yoke. We inserted a 0.5 mm shim between the coil bobbin and the folded pendulum body in H1. The range did not change in H3. The folded pendulum went into an unstable mode in both H1 and H3. We removed counterweights of 25 g for H1 and 19 g for H3. The transfer functions of these two folded pendulums were almost consistent and showed the same resonant frequency of 0.3 Hz (Pic. 4). Before closing the top chamber, the Lemo cable for the actuator broke in H3. Pic. 5 shows the displacement spectra of the H1 and H2 FLDACCs. The sensitivity of the H1 FLDACC was improved.
[Nakagaki, Ikeda, YamaT]
We replaced the V3 front-end computer at U13-14 of the EY0 rack and the V1 IO chassis at U18-21 of the EY0 rack with a V4 front-end computer and a V2 IO chassis, respectively.
Real-time models could be launched, but the 2nd DAC could not be found by the OS for some reason.
We did not have enough time to investigate what happened today, so we will continue this work tomorrow.
-----
Power distribution trouble
Because it is hard to transport many large items to the EYC area via the EYA booth, two V2 IO chassis (for EYV1 and EY0) and a V4 front-end server (for EY0) were moved to the EYC area via the mine entrance at Mozumi. The V1 IO chassis at U18-21 of the EY0 rack was replaced with one of these V2 IO chassis (S2416123). However, the Adnaco boards in the IO chassis were not powered even when the main power switch was enabled. DC 24 V was surely supplied at the power supply board (ATX-M4), so it seems to be a malfunction downstream of the power distribution board. According to Ikeda-san, this IO chassis had never been turned on before, so it may be an initial defect. In any case, we gave up on it and used another IO chassis (S2416124), which was to be used for EYV1 in the future.
Timing lost issue maybe due to PEM speaker system
Using S2416124 with the V4 front-end computer, the real-time models could be launched. But we soon found that timing synchronization did not work, because ADC#0_CH31 for the timing duotone signal was contaminated by a constant ~-4000 ct signal. Although we tried rebooting the models and/or the front-end computer several times, it did not recover.
During these trials, we noticed that the large PEM speaker output loud sounds when certain equipment was turned ON and OFF. Also, when the AI chassis connected to the PEM speaker was turned OFF, the speaker output continuous sounds. Timing synchronization finally came back after unplugging the DB9 cable of the speaker output from the AI chassis. Even when that AI chassis was turned OFF, the timing signal was still lost as long as the DB9 cable was connected, so the GND connection (the shell or pin #5 of the DB9) between the digital system and the speaker system seems to be the cause of this issue.
By the way, the same speaker system also caused hang-up trouble on the MCF rack several times (klog#25343, klog#29433, klog#33565, etc.). It might be better to reconsider how the speaker system is connected to the digital system. If this affects the timing, a rough coincidence analysis might still be possible, but a coherence analysis is no longer reliable.
Missing PCIe card issue
After recovering timing synchronization, we noticed that the 2nd DAC (DAC#1) was not found by the IOP model. According to the lspci command, the OS also could not find DAC#1, so it seemed to be an issue at the common Linux hardware level, not an issue with the LIGO real-time software. Although EY0 has 3 ADCs, 2 DACs, 1 BIO, and 1 BO, this configuration had not been tested on the test bench. We therefore suspected the card-combination issue often seen with the V1 IO chassis and removed the BO card to make the same configuration as EX0, which was successful in klog#36654. But DAC#1 was still missing.
Next, to find out which DAC card was assigned as DAC#1, we swapped the two DAC cards (though we soon realized we could not tell with this method) and restarted the front-end computer. At that point, ADC#0_CH#30, which is assigned to the duotone loopback, showed some response when the DAC duotone was enabled. Then, to check the duotone loopback in the original configuration, we restored the two swapped DACs and restarted. DAC#1 was then found by the OS and the IOP model, though we have no idea why it came back.
Because DAC#1 came back, we also restored the removed BO card to return to the original configuration, and then DAC#1 went missing again... We did not have enough time to continue this investigation today, so this work will be continued tomorrow.
Thoughts
The DAC swap and BO restoration were done without any SCSI or DB37 pin connections, so this issue is related only to the IO chassis or the PCIe cards, not to the circuit connections.
All the PCIe cards in use were simply moved from the V1 IO chassis to the V2 one today, so unless we broke them during today's work, the PCIe cards themselves should be fine. The only concern is that DAC#1 may have been damaged by the speaker issue. We plan to bring a spare DAC card tomorrow just in case.
If the EY0 configuration (3 ADCs, 2 DACs, 1 BIO, and 1 BO) has a problem, permanently removing the BO card may be a solution. That is the same configuration as EX0, so it is a reasonable solution, although the k1caley model would have to be updated. IX1 and IY1 have 3 ADCs, 3 DACs, 5 BIOs, and 1 BO, so using a BO itself should not be a problem. On the other hand, it would not be so surprising if there were a card-combination issue (the number of each card type, the slots used, etc.) with the V2 IO chassis, because we faced that kind of issue with the V1 IO chassis in the past. Of course, it may be a malfunction of the BO card in use, so a check with a spare BO card is also necessary tomorrow.
If S2416124 also has a problem (according to Ikeda-san, it also had not been used on the test bench), we must consider restoring the V1 IO chassis and the V3 front-end computer. In that situation, scheduling the use of the Mozumi entrance again (bringing the problematic equipment back and taking new equipment to EY) is also a serious concern.
Since today's results revealed insufficient testing on the test bench, we will likely need to reconsider and accelerate the operation of the test bench.
[Kimura and Yasui]
It was confirmed that the water level in the tank of the cooling water system attached to the X-10 vacuum pump unit on the X-arm had decreased to approximately one-third of its normal level.
Consequently, on April 1, we performed a complete tank replacement, which also served to refill the water.
After the replacement work, we left the cooling water system cover off to monitor the rate of water loss.
During the routine inspection on April 3, we will reinstall the cooling water system cover after confirming that there are no further water leaks.
I measured L and R for each IM H coil at the feedthroughs (with cross cables), pins 2-7, with the LCR meter at 100 Hz.
|        | H1   | H2   | H3   |
| L [mH] | 8.71 | 8.71 | 8.67 |
| R [Ω]  | 19.8 | 19.8 | 19.6 |
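As a quick plausibility check on these numbers (our arithmetic, not part of the measurement): at the 100 Hz test frequency the inductive reactance of ~8.7 mH is only a few ohms, so the readings are dominated by the DC resistance, and the three coils agree to within a couple of percent:

```python
import math

# Measured values (L in H, R in ohm) from the table above.
coils = {"H1": (8.71e-3, 19.8), "H2": (8.71e-3, 19.8), "H3": (8.67e-3, 19.6)}
f = 100.0  # LCR meter test frequency [Hz]

for name, (L, R) in coils.items():
    x_l = 2 * math.pi * f * L   # inductive reactance [ohm]
    z = math.hypot(R, x_l)      # series-model impedance magnitude [ohm]
    print(f"{name}: X_L = {x_l:.2f} ohm, |Z| = {z:.1f} ohm")
```

The close agreement between the three coils suggests the coils and feedthrough wiring are healthy, i.e. the OSEM actuation problem lies elsewhere.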
Not only the H1 but also the H3 FLDACC doesn't seem healthy.
We strongly suspect that they are rubbing somewhere.
To investigate whether the pendulum itself is healthy, I performed a DC response check for the FLDACCs.
The procedure is as follows:
1. Engage the FLDACC feedback controls so that the pendulum is aligned with respect to the LVDT.
2. Turn off the INPUT of the FLDACCSERVO filter banks to hold the outputs.
3. Change the offset to +100, 0, -100, and 0 with a ramp time of 60 seconds.
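The stepping in step 3 can be sketched as a simple ramp generator. This is a hypothetical helper for illustration, not the actual filter-bank interface (in practice the offset and ramp time are set on the FLDACCSERVO filter banks); the 16 Hz update rate is an assumed value:

```python
def ramp_sequence(targets, ramp_s=60.0, rate_hz=16.0):
    """Generate offset samples that ramp linearly through the target
    values, taking `ramp_s` seconds per segment at `rate_hz` updates/s."""
    samples = []
    current = 0.0
    n = int(ramp_s * rate_hz)  # samples per 60 s segment
    for target in targets:
        step = (target - current) / n
        samples.extend(current + step * (i + 1) for i in range(n))
        current = target
    return samples

# The +100 / 0 / -100 / 0 sequence used in the DC response check.
seq = ramp_sequence([100.0, 0.0, -100.0, 0.0])
# Each segment contributes 960 samples and ends exactly on its target.
```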
Figure 1, 2 and 3 show the result of H1, H2, and H3 FLDACC signals, respectively.
The H1 FLDACC signals move smoothly in the positive direction but not in the negative direction.
The H2 FLDACC signals move smoothly in both directions.
The H3 FLDACC signals move smoothly in the positive direction but not in the negative direction, the same as H1.
So, only the H2 FLDACC seems healthy.
Since there is large hysteresis in the H1 and H3 FLDACCs, we suspect that these proof masses are rubbing somewhere.
Thanks to the guardian modification, the FLDACCs' local controls can be implemented in the guardian.
They are now automatically engaged when the SRM goes to the READY state.
More pictures: link
The actuators in the IM H1/2/3 OSEMs are not working.
[Washimi, Takahashi]
We went in for the recovery work.