DGS (General)
takahiro.yamamoto - 20:31 Monday 23 March 2026 (36635)
Preparation and consideration for the IO chassis replacement at the end stations
[Ikeda, Nakagaki, YamaT]

We carried a V2 IO chassis and a V4 front-end computer to the X end as the new hardware for K1EX0.
The V2 IO chassis is placed at the front of the EX0 rack and the V4 front-end is in the EX1 rack.
This hardware will be used in the replacement work, which will probably start the day after tomorrow.
The status of each station regarding the replacement work is as follows.
It seems that replacement work can only be carried out on the first floor at both end stations over the next couple of weeks.

X-end 1F
Replacement work can be fully conducted.
New fiber cables between EX1 (2F) and EX0 (1F) have already been laid.
An MTP breakout cable (1-2 m) is required to connect the front-end computer to the splice box.
The EX0-side end of the MTP cable goes to the input port of the V2 IO chassis.
One SMF SFP module is required to connect the V4 front-end to the RFM switch, using the RFM board attached to the current V3 front-end.
KVM, UTP for DGS, UTP for DAQ, SMF for RFM, MMF for Timing, and BNC for IRIG-B are necessary in the EX1 rack.
A new 24 V DC power supply for the IO chassis is required.
If a 24 V / 10 A supply isn't enough, a 24 V / 30 A power supply, 5U of rack space, and a 20 A AC source must be prepared.

Y end 1F
The IO chassis can be replaced, but the front-end computer can't be moved to 2F yet.
New fiber cables between EY1 (2F) and EY0 (1F) haven't been laid yet.
Removing the V3 front-end and installing the V4 one in its place is the current best solution.
An MTP breakout cable (1-2 m) is required to connect the front-end computer, still on 1F, to the V2 IO chassis.
All other cables currently used by the V3 front-end computer can be reused.
A new 24 V DC power supply for the IO chassis is required.
If a 24 V / 10 A supply isn't enough, a 24 V / 30 A power supply, 5U of rack space, and a 20 A AC source must be prepared.

X/Y end 2F
We cannot start until a 20-25 m MTP breakout cable for each station is purchased in fiscal year 2026.
All necessary items are already on site, except for the 20-25 m MTP breakout cables and the cabling work.
The same working procedure as for the ITMs can be applied to the ETMs.
VIS (SRM)
naoatsu.hirata - 17:52 Monday 23 March 2026 (36634)
Comment to Assembly of new mirror (36579)

The SRM mirror was carried from Mitaka to Kamioka, and the wedge direction was re-adjusted.

・The SRM mirror was disassembled once and the wedge direction was re-adjusted (pic1).
・Re-assembled it using the 1.4 mm shim.
・Applied first-contact on the HR side and peeled it off for cleaning. It took about 1 hour and was easier than when Aso-san peeled it off in Mitaka last week.
・Peeled off the AR-side first-contact, which had been applied 1 week ago. We were worried that it would be as difficult to peel off as the HR side was in Mitaka, but it was very easy.
・Applied first-contact on the AR side again (pic2).

Images attached to this comment
DGS (General)
takahiro.yamamoto - 17:46 Monday 23 March 2026 (36633)
Comment to Deployment of V2 IO-chassis and the front-end computer for ITMY (36625)
[Ushiba, Ikeda, Nakagaki, YamaT]

Ushiba-kun found that the FLDACC loop couldn't be engaged even though ITMY had reached the ISOLATED state. The IP, GAS, and BF loops seemed to work, so ADC0 (which reads the IP, GAS, BF, and FLDACC sensors) and DAC0 (which drives the IP, GAS, and BF actuators) were surely working well. On the other hand, even when the DAC output for FLDACC (DAC2) was increased, no motion could be seen on the FLDACC sensors.

We didn't touch the DB9 cables when replacing the I/O chassis (klog#36625), so we quickly guessed that mis-connected SCSI cables were the most likely cause. So we monitored the MNIMV sensors (if our guess was correct, the FLDACC actuation had to be swapped onto the MNIMV actuators) while the DAC output was increased. The payload sensors then indeed showed some motion.

We entered the mine to fix the swapped SCSI cables for DAC1 and DAC2. After fixing them, the FLDACC loop worked fine. When we replaced the IO chassis, we had connected the SCSI cables to the ADC/DAC boards on the new IO chassis based on the original tags on the SCSI cables ("DAC0", "DAC1", and "DAC2"), but these tags did not reflect the actual connections. Even if an original tag looks correct, it seems we shouldn't trust it and should make new tags. Anyway, we removed the wrong original tags and attached new, correct tags to those SCSI cables.
VIS (SRM)
ryutaro.takahashi - 17:19 Monday 23 March 2026 (36632)
Comment to Assembly of new mirror (36579)

[Hirata, Washimi, Takahashi]

We removed the 70% mirror (SRM-M) from the Al test mass (Picture 1). At the beginning, we removed the black cylinder from the rear side and removed the mirror holder (Picture 2) by pushing with a Teflon bar from the rear side. We put the extracted 70% mirror on the covered table (Picture 3).

Images attached to this comment
CRY (General)
shoichi.oshino - 15:16 Monday 23 March 2026 (36631)
Comment to Stopping water pump monitoring (36368)
After finishing the Xend cryocooler maintenance, I reverted the Xend water pump alert.
DetChar (General)
takahiro.yamamoto - 11:26 Monday 23 March 2026 (36630)
Update an environment for the online range-estimation code
The online range-estimation code was moved from k1script1 (legacy system) to k1script0 (new Debian 13 system).
As part of this change, the Conda environment it uses was also changed from Python 3.9 to Python 3.11.
This update doesn't change the way the binary ranges are estimated.

-----
The online range-estimation code had been running on k1script1, the legacy environment, and the version of gwpy it uses had already been bumped up in order to use a modern estimation method (see also klog#33063). But the Conda environment for this script was still Python 3.9, which had been deprecated for a while and reached end of life last November. So I decided to bump the Python version up to 3.11 along with the system migration. (Though Python 3.12 and 3.13 are already available and the system Python on k1script0 is 3.13, the stable release of the IGWN-conda subset is still Python 3.11, so I chose Python 3.11.)

This update doesn't change the range estimation itself, so consistency with previous results is maintained for now. On the other hand, locking the RSE will hopefully change the shape of the sensitivity curve drastically, so the filter processing used in the preprocessing stage will likely need to be adjusted in the future.
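For reference, here is a minimal sketch of the kind of range estimate this script performs with gwpy (the channel name, GPS times, and FFT parameters below are placeholders for illustration, not the actual configuration of the online code):

from gwpy.timeseries import TimeSeries
from gwpy.astro import inspiral_range

# placeholder channel and GPS interval; the real script runs on the online DARM data
data = TimeSeries.get('K1:CAL-CS_PROC_DARM_STRAIN_DBL_DQ', 1420000000, 1420000060)

# estimate the strain PSD and the 1.4+1.4 Msun BNS inspiral range
psd = data.psd(fftlength=8, overlap=4)
bns_range = inspiral_range(psd, snr=8, fmin=10)
print(bns_range)  # astropy Quantity in Mpc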
DGS (General)
takahiro.yamamoto - 17:08 Sunday 22 March 2026 (36629)
Forced reboot of k1nfs0

k1nfs0 didn't respond to any requests from other nodes.
Since it did not respond to commands from the physical console either, I performed a forced reboot.
I also recovered the NFS mounts (/kagra and /users) on each NFS client.
If you find any unrecovered nodes (I might have missed some), please let me know.

-----
I noticed that a new console couldn't be launched on the workstations because the NFS area was disconnected. According to the logs of some processes that depend on the NFS area, it died around 2:30-2:40 am. This occurred on multiple nodes, so it seemed to be a problem with k1nfs0 or the core network switch rather than with the workstations. Checking the console of k1nfs0, it didn't respond to any command and showed the messages in Fig. 1.

According to the messages, a CPU core didn't seem to return to a state controllable by the kernel after some kind of task had been running on it. In any case, the reboot and shutdown commands couldn't be executed, so I performed a forced reboot. By the way, the BMC interface wasn't available because the primary NIC port was used for the PICO network instead of the DGS network, so I rebooted it with the power switch instead of the BMC power-control interface. When the BMC is used in shared-LAN mode, the primary NIC must be assigned to the DGS network. For future troubleshooting we must either change the NIC assignment with NetworkManager or set the BMC to dedicated-LAN mode; otherwise physical access will be required during an emergency, making remote recovery impossible.

After rebooting, k1nfs0 came up. Checking the boot logs, I found that the boot disk had been mounted read-only once and then re-mounted read-write. On the other hand, there was no SMART error report for any disk including the boot disk. Though k1nfs0 is running normally now, the SATA path (cable, controller, or route on the motherboard) might be aging. It may be a good time to replace the hardware (and also the legacy OS).

Finally, I recovered the NFS mounts on all clients and the system was back to normal. But there are several tens of clients and I might have missed some of them. If you find unrecovered nodes, please let me know.
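For a quick check on an individual client, something like the following (a hypothetical helper, not the script actually used for the recovery) can confirm that the NFS areas are mounted again:

import os

# check that the NFS areas mentioned above are mounted on this client
for path in ("/kagra", "/users"):
    status = "mounted" if os.path.ismount(path) else "NOT mounted"
    print(f"{path}: {status}")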

Images attached to this report
DGS (General)
takahiro.yamamoto - 21:07 Saturday 21 March 2026 (36628)
Minor fix of user's Python codes for Python3.12 and later
In the upcoming upgrade of the workstations, the Debian 12 system will be replaced with a Debian 13 system.
As a result, the Python version will also be bumped up from 3.11 to 3.13.
In Python 3.12 and later, several legacy syntaxes are treated as SyntaxWarnings or errors, so I have modified some users' Python code to avoid this.
Since the revised syntax is also compatible with Python 3.11, it does not affect the behavior of the current Debian 12 system.

The files updated this time are as follows.
_slackpost.py
cdslib.py


These files were found when I migrated the script that watches various kinds of configuration changes and notifies the #observation channel from k1script1 to k1script0. There are probably other Python codes, including Guardian code, that are incompatible with 3.12 and later. We need to correct such issues as they are found.
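As a hypothetical illustration (not necessarily what was changed in _slackpost.py or cdslib.py), one common case of such legacy syntax is an unrecognized escape sequence in a normal string literal, which Python 3.12 reports as a SyntaxWarning and which will eventually become an error:

import re

old_pattern = "K1:GRD-\w+"   # SyntaxWarning on Python 3.12+: '\w' is not a valid escape
new_pattern = r"K1:GRD-\w+"  # raw string: valid and behaves identically on 3.11 and 3.12+

print(re.findall(new_pattern, "K1:GRD-LSC_LOCK request=LOCKED"))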
MIR (SRM)
yoichi.aso - 0:38 Friday 20 March 2026 (36627)
Fizeau measurements of the 2-inch SRM with various mounting pressure

Hirata, TakahashiR, Kohara (ATC), Tsuzuki (ATC), Yasugaki (ATC), Ebizuka (Riken), Suzuki (TMT), Akutsu, Aso

We measured the surface figure of the 85% 2-inch SRM with different shim thicknesses to find the suitable mounting configuration.

Conclusion

Consistent with Hirose-san’s report, we concluded that using a 1.4 mm thick shim is appropriate.

Note: We were initially concerned that a 1.4 mm shim might provide insufficient pressure to hold the mirror. However, even when the mount was rotated to various orientations, no displacement of the mirror was observed. In addition, the O-ring adheres to the glass surface and helps maintain the mirror position without requiring strong compression. In practice, when we pulled the ⑤ part out of the mirror mount assembly, the O-ring and the mirror came out attached as a single unit.

Measurement Setup

We used a Zygo GPI-XP interferometer with the following configuration.

A λ/10 reference flat was installed in the interferometer. The SRM under test was mounted in a holder, which was supported by two posts and attached to an adjustment stage combining an XYZ stage and a goniometer.
The room temperature was 22.8 °C, which is similar to the temperature in the KAGRA tunnel.

Results

Shim thickness = 1.4mm

We assembled the SRM holder with 1.4mm thick shims first. This is the recommended thickness in Hirose-san's report.
(We used a combination of two 0.5mm shims and four 0.1mm shims)

The surface figure is beautifully spherical as shown below on the left.
The right hand figure shows the surface figure with the power term removed.

The fitted power term is 571.511 nm, which translates into an RoC of 462.8 m. (Note that the RoC specification of the SRM is 458 m ± 20 m.)
The residual RMS after subtraction of the Zernike terms up to power is 10.728 nm. (Note that in Hirose-san's report, this value was 13.3 nm for the 1.4 mm shims.)
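As a rough cross-check of these numbers (a sketch only; it assumes the fitted power term equals the sag s over the analyzed aperture of radius r, which depends on the Zernike convention of the Zygo software):

R \approx \frac{r^2}{2s} \quad\Rightarrow\quad r \approx \sqrt{2Rs} = \sqrt{2 \times 462.8\,\mathrm{m} \times 571.5\,\mathrm{nm}} \approx 23\,\mathrm{mm},

i.e. an analyzed aperture of roughly 46 mm diameter, consistent with a 2-inch optic.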

Shim thickness = 1.3mm

We then removed 0.1mm shims to make the total thickness 1.3mm.

The measured results show clear astigmatism.
The RoC from the power term is 450m. The residual RMS is 103.4nm. While the RoC is still within the tolerance of the specification, the residual RMS is 10 times larger than the 1.4mm case.
This is very similar to the 1.3mm result in Hirose-san's report.

Left: only piston and tilt removed,  Right: power is also removed

Shim thickness = 1.4mm again

We then put the 0.1mm shims back to measure the 1.4mm case again to check the repeatability.

The result is fairly consistent with the previous 1.4mm measurement.
RoC = 458.0m, residual RMS = 13.2nm

Left: only piston and tilt removed,  Right: power is also removed

Closing Remarks

The obtained results are remarkably consistent with Hirose-san's results.
Also it is quite repeatable.
Note that we have two sets of the SRM mirror mount assembly. What we used this time is a different one from the one Hirose-san used. Still the results are quite similar to each other.
There must be a good mechanical reason for this behavior.
Images attached to this report
DGS (General)
takahiro.yamamoto - 18:12 Thursday 19 March 2026 (36626)
Comment to Deployment of V2 IO-chassis and the front-end computer for ITMX (36572)
We removed the old K1IX1 front-end from U20-21 of the ICV rack.
The removed server was moved to the SK server room.
DGS (General)
takahiro.yamamoto - 18:11 Thursday 19 March 2026 (36625)
Deployment of V2 IO-chassis and the front-end computer for ITMY
[Ikeda, Nakagaki, YamaT]

This is the same work as klog#36572, which was for ITMX.
The V1 IO chassis (S1807864) in the IYV1 rack was replaced with a V2 one (S2416129).
The ADC/DAC noise check still remains, but the replacement of the IO chassis itself is finished.

-----
Following the same procedure as in the ITMX case, the V2 IO chassis was first powered up outside the IYV1 rack without connecting any circuits. This was a connection test of the new MTP fiber cable between the mine server room and the IYV room. At that time, the new front-end was installed at U27-28 of the C2 rack (but still named K1IZ1), and K1IY1 at U18-19 of the ICV rack was still alive with the old configuration. Thanks to yesterday's fiber cabling and labeling work, this went smoothly and there was no issue in the connection test.

After the connection test, we stopped both K1IZ1 in the C2 rack and K1IY1 in the ICV rack, removed the V1 IO chassis from U6-9 of the IYV1 rack, installed the V2 IO chassis there, and plugged all the cables into it. (In the ITMX case, we first connected the V2 IO chassis to the circuits outside the rack, but based on yesterday's experience we concluded this step can be skipped.) The V2 IO chassis was then launched with the new K1IY1 in the C2 rack, which was renamed from K1IZ1. The ITMY models could be started without any timing or PCIe activation errors.

We haven't yet measured the ADC/DAC noise with the new hardware configuration, so the replacement work isn't fully completed. On the other hand, a noisy-channel issue would be mitigated by replacing an ADC/DAC board rather than the IO chassis, so we can say that the replacement of the IO chassis itself is finished. We plan to measure the ADC/DAC noise in the new configuration next Tuesday.

The V1 IO chassis and the old K1IY1 front-end are still left in the IYV room and the ICV rack, respectively. After seeing how the weekend goes, we will return them to Mozumi for the next replacement work.
DetChar (General)
hirotaka.yuzurihara - 14:51 Thursday 19 March 2026 (36624)
Comment to Preparation for the upgrade of Pastavi server (36596)

I finished the replacement of the Pastavi server at Kashiwa (in the computer room of the Daini Sougou Tou building). The new server is now working well for Pastavi.
From the user side, the usage is the same as before. How to access the server is written in the documentation.

I tested most of the options in several modes, including the noise-budget mode. As far as I checked, these options are available in the new environment.
If you face any issues, please share the information with me.

The remaining task is to develop a new scheme to handle multiple jobs using HTCondor.
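As a rough illustration of that remaining task, here is a minimal sketch using the HTCondor Python bindings to queue several jobs at once (the executable, arguments, and file names are hypothetical, not the actual Pastavi setup):

import htcondor

# describe one job template; $(Process) distinguishes the queued jobs
sub = htcondor.Submit({
    "executable": "/usr/bin/python3",
    "arguments": "pastavi_job.py --segment $(Process)",  # hypothetical worker script
    "output": "logs/pastavi.$(Process).out",
    "error": "logs/pastavi.$(Process).err",
    "log": "logs/pastavi.log",
    "request_cpus": "1",
})

schedd = htcondor.Schedd()
result = schedd.submit(sub, count=4)  # queue 4 jobs in one cluster
print("submitted cluster", result.cluster())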

CAL (Pcal general)
Misato Onishi - 14:40 Thursday 19 March 2026 (36623)
WSK calibration at UToyama

Date: 2026/03/19

Member: Dan Chen, Shingo Hido, Misato Onishi

We performed our usual WSK calibration at UToyama.

The results show no problems.

Results

Case | Alpha (main value) | Alpha (uncertainty)
Front WSK, Back GSK | -0.910663 | 0.000331
Front GSK, Back WSK | -0.911063 | 0.000076

Comparison with Previous Results

Comparing with previous results, no significant issues were found.
The attached graph is the result summary including the latest measured data.

Images attached to this report
DGS (General)
takahiro.yamamoto - 22:43 Wednesday 18 March 2026 (36619)
Comment to Deployment of V2 IO-chassis and the front-end computer for ITMX (36572)
[Ikeda, Nakagaki, YamaT]

Finally, the V2 IO chassis S2416127 was installed at U6-9 of the IXV1 rack after removing the V1 IO chassis S1807864.
ITMX is currently operated using a fully installed V2 IO chassis and new MTP optical fiber cabling.
This marks the first successful instance of a DGS hardware upgrade toward O5.

-----
The IO chassis for ITMX was replaced from V1 to V2 in klog#36572, but the V2 IO chassis was still placed outside the IXV1 rack because the V1 IO chassis was left in the rack for an easy and quick recovery to the old configuration. Since then, we have confirmed that ITMX can stay in LOCK_ACQUISITION stably with the V2 IO chassis. So today we removed the V1 IO chassis from the IXV1 rack and installed the V2 IO chassis there. With this work completed, ITMX is now operated with a fully installed V2 IO chassis and new MTP fiber cable. ITMX has now reached its final configuration for O5, and this is the first success of the DGS hardware upgrade.

To reduce downtime from DGS upgrades after O5, it is important to do the same upgrade on as many other racks as possible before O5. Currently, we plan to do it for ITMY during the SRM replacement and for ETMX/ETMY after IR1. About 40 working days will be required just for the same work on the remaining 20 front-ends (20 front-ends at roughly 2 days each; because of limitations on equipment and working space, and conflicts between tasks, parallel upgrades of multiple front-ends are probably difficult). Fortunately, based on this experience, we can probably complete the upgrade in about 2 days per front-end, restoring the front-end to operation at the end of each day. So it might be a good idea to consider using each two-week period (two maintenance days) for the same upgrade work on the remaining front-ends, also after the SRM replacement work.

By the way, the V1 IO chassis is still left in the IXV room and the old IX1 front-end is still in the ICV rack (its RFM board was taken today for the upcoming ITMY upgrade). This equipment can be returned to the mine server room or the SK server room anytime, though I'm not sure there is sufficient space in either server room.
VAC (Valves & Pumps)
nobuhiro.kimura - 20:15 Wednesday 18 March 2026 (36618)
Comment to Switching from TMP to Ion Pumps (36549)

[Kimura and M. Takahasi]

On March 18, we switched the #36 vacuum pump unit at the Y-end from the TMP to the ion pump.

After activating the #36 ion pump, we turned off the #36 TMP and the dry pump.
CRY (General)
nobuhiro.kimura - 20:03 Wednesday 18 March 2026 (36617)
Comment to Maintenance Work on the Duct-Shield Cryo-coolers for IXC and IYC was Completed (36605)

[Kimura and M. Takahashi]

 On March 18, we completed maintenance work on the duct-shield cryo-coolers for the  EYC.
We are planning to restart the duct-shield cryo-coolers starting March 23, which will also serve as a test run.

CRY (General)
nobuhiro.kimura - 20:01 Wednesday 18 March 2026 (36616)
Comment to Cryo-cooler Unit Maintenance Work (36134)

[Kimura and M. Takahashi]
 As part of maintenance work on the cryogenic cooling units, we removed two valve units from the radiation shield cryo-coolers (P-53 and P-55) of EYC.
The removed valve units were packaged and returned to the manufacturing plant, where they will be disassembled and inspected.

 We plan to remove the remaining valve units from the IXC, IYC, and EXC radiation shield cryo-coolers early next week.

CAL (XPcal)
dan.chen - 12:19 Wednesday 18 March 2026 (36615)
XPcal calibration

KAGRA Pcal-X updates (2026/03/18)

Workers: Kohei Mitsuhashi, Dan Chen

We performed monthly Pcal-X calibration on 2026/03/18.

After the calibration, we updated EPICS parameters related to the Pcal-X system. No issues were found.

EPICS Key | Before | After | Δ (After − Before)
K1:CAL-PCAL_EX_1_OE_R_SET | 0.98489 | 0.98417 | -0.00072
K1:CAL-PCAL_EX_1_OE_T_SET | 0.98489 | 0.98417 | -0.00072
K1:CAL-PCAL_EX_1_PD_BG_RX_V_SET | -0.00381 | -0.00386 | -0.00005
K1:CAL-PCAL_EX_1_PD_BG_TX_V_SET | 0.00689 | 0.00602 | -0.00087
K1:CAL-PCAL_EX_1_RX_V_R_SET | 0.50206 | 0.50202 | -0.00004
K1:CAL-PCAL_EX_2_INJ_V_GAIN | 0.95148 | 0.95101 | -0.00047
K1:CAL-PCAL_EX_2_OE_R_SET | 0.97457 | 0.97404 | -0.00053
K1:CAL-PCAL_EX_2_OE_T_SET | 0.97457 | 0.97404 | -0.00053
K1:CAL-PCAL_EX_2_PD_BG_TX_V_SET | 0.00551 | 0.00519 | -0.00032
K1:CAL-PCAL_EX_2_RX_V_R_SET | 0.49794 | 0.49798 | 0.00004
K1:CAL-PCAL_EX_WSK_PER_RX_SET | 1.48800 | 1.48915 | 0.00115
K1:CAL-PCAL_EX_WSK_PER_TX1_SET | 0.52710 | 0.52744 | 0.00035
K1:CAL-PCAL_EX_WSK_PER_TX2_SET | 0.38759 | 0.38799 | 0.00040
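For reference, an EPICS parameter update of this kind can be checked and applied with pyepics as in the following minimal sketch (an illustration only, not the actual Pcal update script; the key and value are taken from the first row of the table above, and writing requires access to the K1 EPICS network):

from epics import caget, caput

key = "K1:CAL-PCAL_EX_1_OE_R_SET"
print("before:", caget(key))    # should show the 'Before' value
caput(key, 0.98417, wait=True)  # write the new value
print("after:", caget(key))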

 

Images attached to this report
VIS (IY)
ryutaro.takahashi - 9:24 Wednesday 18 March 2026 (36614)
Offload of GAS filters

I offloaded the F0 and F1 GAS filters with the FRs.

PEM (Center)
takaaki.yokozawa - 9:03 Wednesday 18 March 2026 (36613)
First test of the ground motion at the new laser room
I measured the ground motion with the portable Trillium Compact system in the new laser room in the Kamioka (neutrino) building over the last weekend.
The attached spectrum is from around 00:00:00 JST on the 16th.
Compared with klog#36509:
- The trends of XY and Z were different.
- Large peaks were detected at 6 Hz in the XY directions.
- Above 10 Hz, the ground motion was larger than at the KAGRA Hokubu-Kaikan 2F.
- Possibly due to vibration from the computer (keisanki) room on the 1st floor?

Next, we will check the spectrum against the traffic situation (trucks and so on).
Images attached to this report
CRY (General)
nobuhiro.kimura - 8:45 Wednesday 18 March 2026 (36611)
Comment to Maintenance Work on the Duct-Shield Cryo-coolers for IXC and IYC was Completed (36605)

[Kimura and Yasui]

Parts replaced during maintenance work
1. Absorber in the helium compressor
2. Filter unit at the supply side
3. Gas replacement inside the cold head in accordance with SHI work procedures
4. Disassembly and maintenance of the valve unit at the SHI factory
5. Gas replacement in the compressor and flexible tubing
(1) Helium gas used: G-1 grade
(2) Filling pressure: 14.8 bar

PEM (Center)
takaaki.yokozawa - 8:45 Wednesday 18 March 2026 (36612)
Comment to Tapping test at PSL room (36594)
Global trend
Similar shape in POW PMC OUT and IP QPD P,Y
Large signal in IMMT1 trans when POW PMC OUT became large
Several differences between IMC refl QPD 1 and 2
Distance dependence of the response of IP1, 2
1 inch : resonant frequency basically around 500 Hz
2 inch : peaks basically around 330 and 470 Hz
150 Hz (M18-1) and 210 Hz (M17) are special resonant frequencies

Before the PMC
Relatively small signal in the IMC refl sensors
large peak at POW PMC OUT
340(M04’), 360(M3’) and 680(M04’) Hz may come from the 2 inch mirror before the PMC

After the PMC
M5 1 inch : some 520 Hz signal in IMC refl sensors, several peak in IP pitch signal
M6 1 inch : Quite large peak at 470 Hz in IMC refl sensors, IP peaks except for QPD1 yaw, small peak in IMMT1 trans
M7 2 inch : some peak in 470 Hz in IMC refl, IP, ACC1, small peak in IMMT1 trans P>Y
M8 1 inch : Peak at 520 Hz in IMC refl and IP, DC QPD2 pitch largest
M9 1 inch : Peak in 510 Hz in various IMC refl (QPD1 larger), and IP (QPD1 larger), 2nd peak 560 Hz in several sensors
M10 1 inch : 490 Hz peaks in several sensors
M18-1 2 inch: large peak at 150 Hz, several peak in 295 and 440 Hz, 640 Hz
M12 2 inch : peak at 330 Hz, 485 Hz(shape different), 550 Hz
M13 1 inch : peak at 550 Hz
M14 1 inch : peak at 510 Hz
M17 2 inch : peak at 210 Hz, 337, 420, 640, 710 Hz
M18-2 2 inch : 337, 350, 550, 680 Hz
M19 2 inch : 365, 395 Hz
M20 2 inch : peaks in 320, 350 and 630 Hz
periscope upper : 450 Hz
periscope lower : 450 Hz
CAL (XPcal)
dan.chen - 7:45 Wednesday 18 March 2026 (36610)
Pcal-X beam position check

A CAL Tcam session was performed to obtain beam position information necessary for Pcal. The parameters have already been updated, and SDF has been accepted.

Operator: Shingo Hido, Kohei Mitsuhashi, Dan Chen

Update Time: 2026/03/18 07:41:38

EPICS Key | Before [mm] | After [mm] | Δ (After − Before) [mm]
K1:CAL-PCAL_EX_TCAM_PATH1_X | 0.07325 | -0.87474 | -0.94799
K1:CAL-PCAL_EX_TCAM_PATH1_Y | 66.15693 | 66.37439 | +0.21746
K1:CAL-PCAL_EX_TCAM_PATH2_X | 0.56752 | -0.37755 | -0.94507
K1:CAL-PCAL_EX_TCAM_PATH2_Y | -68.50339 | -67.02710 | +1.47629

 

VIS (SR2)
kenta.tanaka - 0:39 Wednesday 18 March 2026 (36609)
Modification of rolloff filters of SR2 Payload local control

This work is a continuation of klog#36589.

## GAS control modification

According to yesterday's results for GAS, there seemed to be gain peaking at 3-4 Hz, so I reduced the {F0, F1, BF} gains to -6 dB. After that, I measured their OLTFs while SR2 was in the LOCK_ACQUISITION state. Figs. 1, 2, and 3 show the results. Unfortunately, the OLTFs could not be measured well, maybe due to coupling between the GAS controls. According to the loop suppression (IN2/EXC), the controls seem to work well and the peaking around 3-4 Hz has become smaller. Fig. 4 shows the error/feedback spectra. The peaking seems to have disappeared, although the residual motion of each GAS stage does not seem to have changed much.

## IM DAMP roll off modification

I also modified the roll-off filters for IM DAMP {L, P, R, Y}. As for T and V, the elliptic filter was not used in SR3 either, so I did not touch them this time. (Maybe it would be better to change them.)

I measured the OLTFs after the modification. Figures 5, 6, 7, and 8 show the OLTFs of IM DAMP {L, P, R, Y}. Figs. 9 and 10 show the error/feedback spectra. The feedback above 10 Hz was rolled off successfully.
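For context, a roll-off filter of this kind can be sketched with scipy as below (a minimal illustration only, not the actual SR2 filter; the order, ripple, corner frequency, and sampling rate are assumptions):

import numpy as np
from scipy import signal

fs = 2048.0  # model sampling rate [Hz] (assumption)
fc = 10.0    # roll-off corner [Hz] (assumption)

# 4th-order elliptic low-pass: 1 dB passband ripple, 40 dB stopband attenuation
b, a = signal.ellip(4, 1, 40, fc, btype='low', fs=fs)

# inspect the magnitude response to see how strongly the feedback is cut above 10 Hz
f, h = signal.freqz(b, a, worN=np.logspace(-1, 2, 500), fs=fs)
print(f"attenuation at 30 Hz: {20*np.log10(abs(h[np.argmin(abs(f - 30))])):.1f} dB")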

Images attached to this report
CAL (General)
takahiro.yamamoto - 22:33 Tuesday 17 March 2026 (36608)
Comment to Inspection of a network trouble between CAL and DMG for Low-Latency data transfer (36588)
Finally, this issue was solved by restoring the vanished VLAN settings on the DMG network switch (klog#36602).
Data transfer of the output of the low-latency calibration pipeline was also resumed.
For now it is nothing more than PD dark noise with the DARM sensing response applied, but the latest LL frames are now also available at Kashiwa.