Here is a supplemental comment.
The condor job was submitted yesterday evening on the test server. To test the stability of the process, we left it running overnight.
This morning, the condor job was in the held state with an error. The error message in ~/public_html/summary/log/gw_daily_summary.log reads:
012 (093.000.000) 2024-10-30 07:03:52 Job was held.
Job has gone over cgroup memory limit of 0 megabytes. Peak usage: 0 megabytes. Consider resubmitting with a higher request_memory.
Code 34 Subcode 0
The test server has only 8 GB of memory. We need much more memory to run the summary-page process stably. Note that k1sum0 (where the current summary page runs) has 128 GB.
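As the hold message suggests, the memory request can also be raised explicitly in the HTCondor submit description. A minimal sketch, assuming a vanilla-universe job (the executable name and the 8 GB value here are placeholders, not our actual submit file):

```
# hypothetical submit description; executable and value are placeholders
universe       = vanilla
executable     = gw_daily_summary.sh
request_memory = 8 GB
queue
```

HTCondor accepts request_memory with explicit units (e.g. "8 GB"); a bare number is interpreted as megabytes.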
We discussed which computer we could use to test the summary page.
k1det0 will probably be used to test the summary page after replacing its hard disk, pending discussion with yamaT-san.
The environment variable that identifies the NDS server on a workstation is named NDSSERVER or NDS2SERVER. We did not set these variables when we tested the summary page on the test server, yet the process still ran and succeeded in reading the frame data. It would be good to clarify how gwsumm reads the frame data.
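For the record, these variables are normally set as host:port pairs. A sketch, assuming the usual default ports (8088 for NDS1, 31200 for NDS2); the hostnames and ports below are examples, not a statement of our site configuration:

```shell
# example values only -- adjust host and port to the local NDS servers
export NDSSERVER=k1nds0:8088
export NDS2SERVER=k1nds2:31200
```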
I now understand how gwsumm reads frame data. The k1global configuration file has an option to set the NDS host and port number. We could read the frame data because k1nds0 is set correctly in this ini file.
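Schematically, the relevant lines in the ini file look something like the fragment below. This is only an illustration: the section and key names are my assumptions, not copied from the actual k1global configuration.

```
; illustrative fragment -- section and key names are assumptions,
; not taken verbatim from the k1global ini file
[DEFAULT]
host = k1nds0
port = 8088
```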
On the other hand, I tried to read past frame data by setting k1nds2 and the proper port number. When I requested 32 channels at the same time, the process failed with the following error:
RuntimeError: Low level daq error occured [22]: Too many channels or too much data requested.
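A common workaround for this NDS2 limit is to request the channels in smaller batches and merge the results. A minimal sketch of the batching logic (the actual fetch call is left as a callable parameter, since it needs a live NDS2 server; the function names here are hypothetical, not gwsumm API):

```python
def batched(channels, batch_size):
    """Yield successive batches of at most batch_size channels."""
    for i in range(0, len(channels), batch_size):
        yield channels[i:i + batch_size]

def fetch_all(channels, fetch, batch_size=8):
    """Fetch data for many channels in batches.

    `fetch` is a callable taking a list of channel names and returning
    a dict of {channel: data}; in practice it would wrap an NDS2 client
    request against k1nds2.
    """
    data = {}
    for batch in batched(channels, batch_size):
        data.update(fetch(batch))
    return data
```

With batch_size=8, for example, a 32-channel request becomes four smaller server calls instead of one large one.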
Thank you for the comment. I understand your point.
When I set the environment variable LIGO_DATAFIND_SERVER appropriately, we could read the data via the GWDataFind server! So I think there is no longer the CPU load on k1nds[01] that we saw previously. In addition, we succeeded in making the summary page at the detchar cluster (without HTCondor)! Today, Oshino-san and I finished the HTCondor test at the detchar cluster and improved the configuration so that multiple CPUs are used at the same time.
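For anyone reproducing this: LIGO_DATAFIND_SERVER is also set as a host:port pair, which gwdatafind then picks up automatically. The hostname below is a placeholder, not the actual server we used:

```shell
# placeholder hostname -- point this at the site's GWDataFind server
export LIGO_DATAFIND_SERVER=datafind.example.org:443
```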
Tomorrow, we will try running gwsumm under HTCondor with multiple CPUs.