[Ru_ngi] Problems with VOMS servers
Viktor Kotliar
Viktor.Kotliar at ihep.ru
Wed May 19 16:26:32 MSK 2021
Hello Yevgeniy,
I would check the queue access settings in arc.conf [attached] and make
sure the pool account files are in place (if the mapping is done through
them) [1].
Good luck!
Viktor
[1]
```
cat /etc/grid-security/pool/pilatl/pool
pilatl01
pilatl02
....
```
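The check in [1] can be scripted: every pool directory under /etc/grid-security/pool should contain a non-empty `pool` file listing the local accounts. A minimal sketch — the demo below runs on a throw-away directory rather than the real layout, and the account names are made up:

```shell
#!/bin/sh
# Sketch: flag any VO pool directory whose "pool" account-list file is
# missing or empty (real layout: /etc/grid-security/pool/<group>/pool).
check_pools() {
    base="$1"; bad=0; total=0
    for dir in "$base"/*/; do
        [ -d "$dir" ] || continue
        total=$((total+1))
        if ! [ -s "${dir}pool" ]; then
            echo "missing or empty: ${dir}pool"
            bad=$((bad+1))
        fi
    done
    echo "checked: $total, problems: $bad"
}

# Demo on a temporary layout (hypothetical account names):
tmp=$(mktemp -d)
mkdir -p "$tmp/lhcb" "$tmp/pillhb"
printf 'lhcb01\nlhcb02\n' > "$tmp/lhcb/pool"   # populated account list
: > "$tmp/pillhb/pool"                         # empty file gets flagged
check_pools "$tmp"
rm -rf "$tmp"
```

Point it at the real /etc/grid-security/pool on the CE to spot a group whose mapping will fail for lack of pool accounts.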
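Also note that a clean ping and a traceroute that goes silent after the CERN border routers say nothing about whether the VOMS service port itself is reachable. A hedged sketch that probes a TCP port directly (the port numbers are assumptions — 8443 is the usual VOMS-Admin port, and the per-VO vomses port, e.g. 15003 for lhcb, should be taken from your own vomses files):

```shell
#!/bin/sh
# Sketch: TCP-level probe using bash's /dev/tcp redirection.
# ICMP working does not imply the service port passes the firewalls
# on the path. Requires bash and coreutils "timeout".
probe_port() {
    host="$1"; port="$2"
    if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
        echo "open"
    else
        echo "closed or filtered"
    fi
}

# Example (port is an assumption; take it from your vomses files):
probe_port voms2.cern.ch 8443
```

If the port shows as filtered, a firewall change on either side is the likely culprit; if it is open, the problem is more likely at the TLS/VOMS level (expired CRLs, stale CA certificates, or outdated LSC files).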
On 19.05.2021 16:02, Yevgeniy wrote:
> Good day.
>
> Hoping someone has advice.
>
> A few days ago I ran into a problem (only me, or...???): errors when
> working with the VOMS servers for all VOs.
>
> 1. gridftp.log
>
> [2021-05-19 03:09:06] [Arc.JobPlugin] [ERROR] [18656/139812526891072]
> Requested queue lhcb is not allowed for this user
> [2021-05-19 03:09:06] [Arc.GridFTP_Commands] [VERBOSE]
> [18656/139812526891072] response: 451 Requested queue lhcb is not
> allowed for this user
>
> 2. [root at ceitep arc]# arcctl deploy voms-lsc lhcb --egi-vo
> [2021-05-19 15:51:30,179] [ARCCTL.ThirdParty.Deploy] [ERROR] [9794]
> [Failed to reach EGI VO Database server. Error: ]
>
> 3. Nothing has changed here:
>
> [root at ceitep arc]# ll /etc/grid-security/pool/lhcb
> total 12
> -rw-r--r--. 1 root root 8 Apr 28 15:27 _DC=ch_DC=cern_OU=Organic
> Units_OU=Users_CN=lbdirac_CN=377643_CN=Robot: LHCb Dirac Service Account
> -rw-r--r--. 1 root root 8 Apr 28 22:32 _DC=ch_DC=cern_OU=Organic
> Units_OU=Users_CN=romanov_CN=427293_CN=Vladimir Romanovskiy
> -rw-r--r--. 1 root root 598 Sep 20 2020 pool
>
> 4. ping voms2.cern.ch works without problems.
>
> [root at ceitep arc]# traceroute voms2.cern.ch
> traceroute to voms2.cern.ch (188.185.89.165), 30 hops max, 60 byte packets
>  1 gateway (144.206.151.1) 0.296 ms 0.257 ms 0.228 ms
> 2 we.itep.bgp.as59624.net (144.206.254.41) 1.322 ms 1.299 ms 1.281 ms
> 3 e513-e-rjuxm-1-ee2.cern.ch (192.16.155.1) 59.255 ms 59.640 ms
> 59.611 ms
> 4 e773-e-rjuxm-2-ne0.cern.ch (192.65.184.190) 59.312 ms 59.547 ms
> 59.535 ms
> 5 * * *
> ...
> 30 * * *
>
> [root at ceitep arc]#
>
> It looks like the problem is on my side, but what is it?
>
> Thanks.
>
> Good luck. Yevgeniy
>
>
-------------- next part --------------
[common]
hostname = ce0004.m45.ihep.su
x509_host_key = /etc/grid-security/hostkey.pem
x509_host_cert = /etc/grid-security/hostcert.pem
x509_cert_dir = /etc/grid-security/certificates
x509_voms_dir = /etc/grid-security/vomsdir
[authgroup:sgmops]
voms = ops * lcgadmin *
[authgroup:pilops]
voms = ops * pilot *
[authgroup:ops]
voms = ops * * *
[authgroup:sgmdtm]
voms = dteam * lcgadmin *
[authgroup:prddtm]
voms = dteam * production *
[authgroup:dteam]
voms = dteam * * *
[authgroup:sgmcms]
voms = cms * lcgadmin *
[authgroup:prdcms]
voms = cms * production *
[authgroup:pricms]
voms = cms * pilot *
voms = cms * priorityuser *
[authgroup:cms]
voms = cms * * *
[authgroup:sgmali]
voms = alice * lcgadmin *
[authgroup:prdali]
voms = alice * production *
[authgroup:pilali]
voms = alice * pilot *
[authgroup:alice]
voms = alice * * *
[authgroup:sgmlhb]
voms = lhcb * lcgadmin *
[authgroup:prdlhb]
voms = lhcb * production *
[authgroup:pillhb]
voms = lhcb * pilot *
[authgroup:lhcb]
voms = lhcb * * *
[authgroup:sgmatl]
voms = atlas * lcgadmin *
[authgroup:prdatl]
voms = atlas * production *
[authgroup:pilatl]
voms = atlas * pilot *
[authgroup:atlas]
voms = atlas * * *
[mapping]
map_to_pool = sgmops /etc/grid-security/pool/sgmops
map_to_pool = pilops /etc/grid-security/pool/pilops
map_to_pool = ops /etc/grid-security/pool/ops
map_to_pool = sgmdtm /etc/grid-security/pool/sgmdtm
map_to_pool = prddtm /etc/grid-security/pool/prddtm
map_to_pool = dteam /etc/grid-security/pool/dteam
map_to_user = sgmcms sgmcms01
map_to_pool = prdcms /etc/grid-security/pool/prdcms
map_to_pool = pricms /etc/grid-security/pool/pricms
map_to_pool = cms /etc/grid-security/pool/cms
map_to_pool = sgmali /etc/grid-security/pool/sgmali
map_to_pool = prdali /etc/grid-security/pool/prdali
map_to_pool = pilali /etc/grid-security/pool/pilali
map_to_pool = alice /etc/grid-security/pool/alice
map_to_pool = sgmlhb /etc/grid-security/pool/sgmlhb
map_to_pool = prdlhb /etc/grid-security/pool/prdlhb
map_to_pool = pillhb /etc/grid-security/pool/pillhb
map_to_pool = lhcb /etc/grid-security/pool/lhcb
map_to_pool = sgmatl /etc/grid-security/pool/sgmatl
map_to_pool = prdatl /etc/grid-security/pool/prdatl
map_to_pool = pilatl /etc/grid-security/pool/pilatl
map_to_pool = atlas /etc/grid-security/pool/atlas
[lrms]
lrms = pbs
pbs_bin_path = /opt/pbs_tcl/torque/bin
[arex]
x509_host_key = /etc/grid-security/hostkey.pem
x509_host_cert = /etc/grid-security/hostcert.pem
delegationdb = sqlite
watchdog = no
loglevel = 5
joblog = /var/log/arc/arex-jobs.log
controldir = /var/spool/arc/jobstatus
sessiondir = /var/spool/arc/sessiondir
defaultttl = 604800 1296000
shared_filesystem = no
scratchdir = /scratch
mail = lcg at ihep.ru
maxrerun = 3
wakeupperiod = 64
infoproviders_timelimit = 10800
[arex/ws]
wsurl = https://ce0004.m45.ihep.su:443/arex
max_job_control_requests = 1024
max_infosys_requests = 32
max_data_transfer_requests = 1024
[arex/ws/jobs]
allownew = yes
allowaccess = sgmops
allowaccess = pilops
allowaccess = ops
allowaccess = sgmdtm
allowaccess = prddtm
allowaccess = dteam
allowaccess = sgmcms
allowaccess = prdcms
allowaccess = pricms
allowaccess = cms
allowaccess = sgmali
allowaccess = prdali
allowaccess = pilali
allowaccess = alice
allowaccess = sgmlhb
allowaccess = prdlhb
allowaccess = pillhb
allowaccess = lhcb
allowaccess = sgmatl
allowaccess = prdatl
allowaccess = pilatl
allowaccess = atlas
maxjobdesc = 5242880
[arex/data-staging]
logfile = /var/log/arc/datastaging.log
loglevel = 5
usehostcert = yes
maxtransfertries = 10
passivetransfer = no
globus_tcp_port_range = 9000,14000
globus_udp_port_range = 9000,14000
httpgetpartial = no
speedcontrol = 0 300 100 300
maxdelivery = 256
maxprocessor = 32
maxemergency = 16
maxprepared = 512
[arex/cache]
cachedir = /var/spool/arc/cache
[arex/cache/cleaner]
calculatesize = cachedir
logfile = /var/log/arc/cache-cleaner.log
cachesize = 80 60
loglevel = 5
cachelifetime = 14d
cachecleantimeout = 10000
[arex/jura]
loglevel = 5
[arex/jura/apel:wlcg]
targeturl = https://broker-prod1.argo.grnet.gr:6162/
use_ssl = yes
gocdb_name = RU-Protvino-IHEP
urbatchsize = 1024
urdelivery_frequency = 86400
vofilter = ops
vofilter = dteam
vofilter = alice
vofilter = cms
vofilter = atlas
vofilter = lhcb
[gridftpd]
loglevel = 5
port = 2811
globus_tcp_port_range = 9000,12000
maxconnections = 256
[gridftpd/jobs]
allownew = yes
allowaccess = sgmops
allowaccess = pilops
allowaccess = ops
allowaccess = sgmdtm
allowaccess = prddtm
allowaccess = dteam
allowaccess = sgmcms
allowaccess = prdcms
allowaccess = pricms
allowaccess = cms
allowaccess = sgmali
allowaccess = prdali
allowaccess = pilali
allowaccess = alice
allowaccess = sgmlhb
allowaccess = prdlhb
allowaccess = pillhb
allowaccess = lhcb
allowaccess = sgmatl
allowaccess = prdatl
allowaccess = pilatl
allowaccess = atlas
[infosys]
loglevel = 5
hostname = ce0004.m45.ihep.su
slapd_loglevel = 5
bdii_debug_level = DEBUG
[infosys/nordugrid]
[infosys/ldap]
hostname = ce0004.m45.ihep.su
slapd_hostnamebind = *
port = 2135
user = ldap
timelimit = 1800
[infosys/glue1]
resource_location = Protvino, Russia
resource_latitude = 55.50
resource_longitude = 37.37
cpu_scaling_reference_si00 = 2732
glue_site_web = http://www.ihep.su
glue_site_unique_id = RU-Protvino-IHEP
processor_other_description = Benchmark=8.8-HEP-SPEC06
[infosys/glue2]
admindomain_name = RU-Protvino-IHEP
admindomain_www = http://www.ihep.su
admindomain_owner = lcg at ihep.ru
computingservice_qualitylevel = production
admindomain_description = Institute for High Energy Physics named by A.A. Logunov of National Research Centre Kurchatov Institute
admindomain_distributed = no
admindomain_otherinfo = GLUE2ExecutionEnvironmentMainMemorySize=2048
[infosys/glue2/ldap]
showactivities = no
[infosys/cluster]
alias = RU-Protvino-IHEP
hostname = ce0004.m45.ihep.su
comment = TIER 2
cluster_location = RU-142280
cluster_owner = ITSW group
cluster_owner = OMBT
cluster_owner = IHEP
advertisedvo = ops
advertisedvo = dteam
advertisedvo = atlas
advertisedvo = alice
advertisedvo = cms
advertisedvo = lhcb
clustersupport = lcg at ihep.ru
architecture = x86_64
opsys = CentOS-7.7
nodecpu = Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
cpudistribution = 16cpu:1,40cpu:6,4cpu:9,8cpu:26,24cpu:106
maxcputime = 216000
maxwalltime = 259200
nodememory = 2048
homogeneity = true
nodeaccess = outbound
[queue:alice]
allowaccess = sgmali
allowaccess = prdali
allowaccess = pilali
allowaccess = alice
advertisedvo = alice
benchmark = HEPSPEC 8.8
maxwalltime = 172800
maxcputime = 129600
[queue:ops]
allowaccess = sgmops
allowaccess = pilops
allowaccess = ops
advertisedvo = ops
benchmark = HEPSPEC 8.8
maxwalltime = 3600
maxcputime = 3600
[queue:cms]
allowaccess = sgmcms
allowaccess = prdcms
allowaccess = pricms
allowaccess = cms
advertisedvo = cms
benchmark = HEPSPEC 8.8
maxwalltime = 259200
maxcputime = 216000
[queue:lhcb]
allowaccess = sgmlhb
allowaccess = prdlhb
allowaccess = pillhb
allowaccess = lhcb
advertisedvo = lhcb
benchmark = HEPSPEC 8.8
maxwalltime = 259200
maxcputime = 216000
maxslotsperjob = 1
[queue:atlas]
allowaccess = sgmatl
allowaccess = prdatl
allowaccess = pilatl
allowaccess = atlas
advertisedvo = atlas
benchmark = HEPSPEC 8.8
maxwalltime = 345600
maxcputime = 302400
[queue:cmsmc]
allowaccess = sgmcms
allowaccess = prdcms
allowaccess = pricms
allowaccess = cms
advertisedvo = cms
benchmark = HEPSPEC 8.8
maxwalltime = 345600
maxcputime = 2764800