Re: dfuse mount error in centos 7
Groot
OK, thanks. And I want to know if I must mount the dfuse before using the libioil interception library?
Thanks a lot.
Groot
Re: dfuse mount error in centos 7
Hennecke, Michael
Hi,
please make sure that you are on CentOS 7.9; your kernel level indicates that you may be on an older release.
Best, Michael
From: daos@daos.groups.io <daos@daos.groups.io> On Behalf Of Groot
Sent: Saturday, 9 April 2022 16:57
To: daos@daos.groups.io
Subject: [daos] dfuse mount error in centos 7
I created the pool and container with POSIX type successfully, but mounting the dfs with dfuse failed and I got the following error message:
dfuse mount error in centos 7
Groot
I created the pool and container with POSIX type successfully, but mounting the dfs with dfuse fails with the following error message:
fuse: error: filesystem requested capabilities 0x10000 that are not supported by kernel, aborting.
My system is CentOS 7 and the kernel is 3.10.0-957. I installed the DAOS services from the yum repo.
Thanks a lot.
DAOS Community Update / Apr'22
Lombardi, Johann
Hi there,
Please find below the DAOS community newsletter for April 2022.
Past Events
Upcoming Events
Release
R&D
News
See https://events.linuxfoundation.org/sodacode/ for more information.
Re: dfs_lookup behavior for non-existent files?
Tuffli, Chuck
Mohamad
Thank you for the sanity check regarding dfs_lookup. After a little sleuthing, the application (evidently) was modifying the effective UID/GID around the time of that lookup, and it was this UID/GID change that made networking fail. With those calls changed, DFS is now doing what I expected/thought/hoped
🙂
--chuck
From: daos@daos.groups.io <daos@daos.groups.io> on behalf of Chaarawi, Mohamad <mohamad.chaarawi@...>
Sent: Tuesday, April 5, 2022 5:21 AM
To: daos@daos.groups.io <daos@daos.groups.io>
Subject: Re: [daos] dfs_lookup behavior for non-existent files?
Hi Chuck,
Neither dfs_lookup nor dfs_stat sets the st_ino in the stat buf. The reason is that files are uniquely identified by the DAOS object ID, which is 128 bits (64 hi, 64 lo). You can retrieve that using dfs_obj2id(): https://github.com/daos-stack/daos/blob/master/src/include/daos_fs.h#L316
Now for the other error, that seems weird. The errors are coming from the network layer. At that point, are there any servers that are down or were killed (specifically the engine with rank 1)? This would explain the errors. When I try this myself, I get ENOENT for lookup on “//.Trash” as expected.
Thanks, Mohamad
From: daos@daos.groups.io <daos@daos.groups.io> on behalf of Tuffli, Chuck <chuck.tuffli@...>
I'm porting an existing application to use DFS (DAOS v2.0.2) instead of POSIX and need help understanding the error messages printed to the console.
The code is using dfs_lookup() to retrieve the struct stat of a file. Note the implementation cannot use dfs_stat() as it requires valid values for fields such as st_ino that dfs_stat() does not provide. The code in question is:
int
d_lstat(const char * restrict path, struct stat * restrict sb)
{
    int rc;
    dfs_obj_t *obj = NULL;

    rc = dfs_lookup(dfs, path, O_RDONLY, &obj, NULL, sb);
    ...
If the file path exists (e.g. "/"), this works. But if the path doesn't exist (e.g. "//.Trash"), the call to dfs_lookup() does not return. Instead, the console endlessly prints messages like:
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] external ERR # [6937851.329315] mercury->msg: [error] /builddir/build/BUILD/mercury-2.1.0rc4/src/na/na_ofi.c:2972 # na_ofi_msg_send(): fi_tsend() failed, rc: -13 (Permission denied)
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] external ERR # [6937851.329374] mercury->hg: [error] /builddir/build/BUILD/mercury-2.1.0rc4/src/mercury_core.c:2727 # hg_core_forward_na(): Could not post send for input buffer (NA_ACCESS)
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] hg ERR src/cart/crt_hg.c:1104 crt_hg_req_send_cb(0x1d0cd40) [opc=0x4070001 (DAOS) rpcid=0x63f8133700000008 rank:tag=1:2] RPC failed; rc: DER_HG(-1020): 'Transport layer mercury error'
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] object ERR src/object/cli_shard.c:889 dc_rw_cb() RPC 1 failed, DER_HG(-1020): 'Transport layer mercury error'
Am I mis-using dfs_lookup() or using it incorrectly?
--chuck
Re: dfs_lookup behavior for non-existent files?
Chaarawi, Mohamad
Hi Chuck,
Neither dfs_lookup nor dfs_stat sets the st_ino in the stat buf. The reason is that files are uniquely identified by the DAOS object ID, which is 128 bits (64 hi, 64 lo). You can retrieve that using dfs_obj2id(): https://github.com/daos-stack/daos/blob/master/src/include/daos_fs.h#L316
Now for the other error, that seems weird. The errors are coming from the network layer. At that point, are there any servers that are down or were killed (specifically the engine with rank 1)? This would explain the errors. When I try this myself, I get ENOENT for lookup on “//.Trash” as expected.
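To make the two points above concrete, here is a minimal, untested sketch (my illustration, not code from this thread): it looks up a path on an already-mounted dfs handle, treats ENOENT as the normal "not found" result, and retrieves the 128-bit object ID with dfs_obj2id() instead of relying on st_ino.

#include <errno.h>
#include <fcntl.h>
#include <inttypes.h>
#include <stdio.h>
#include <sys/stat.h>
#include <daos.h>
#include <daos_fs.h>

/* Look up "path" in an already-mounted DFS namespace, treat ENOENT as the
 * normal "not found" case, and print the 128-bit object ID instead of
 * relying on st_ino (which DFS does not fill in). */
static int
lookup_and_print_oid(dfs_t *dfs, const char *path)
{
    dfs_obj_t     *obj = NULL;
    struct stat    sb;
    daos_obj_id_t  oid;
    int            rc;

    rc = dfs_lookup(dfs, path, O_RDONLY, &obj, NULL, &sb);
    if (rc == ENOENT) {
        fprintf(stderr, "%s: no such file or directory\n", path);
        return rc;
    }
    if (rc != 0)
        return rc; /* DFS calls return errno-style codes */

    rc = dfs_obj2id(obj, &oid);
    if (rc == 0)
        printf("%s -> oid %" PRIx64 ".%" PRIx64 ", size %" PRIu64 "\n",
               path, oid.hi, oid.lo, (uint64_t)sb.st_size);

    dfs_release(obj);
    return rc;
}

If a 64-bit st_ino is still required by existing callers, it would have to be derived from this 128-bit ID (for example by truncation or hashing), which is necessarily a lossy mapping.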
Thanks, Mohamad
From: daos@daos.groups.io <daos@daos.groups.io> on behalf of Tuffli, Chuck <chuck.tuffli@...>
I'm porting an existing application to use DFS (DAOS v2.0.2) instead of POSIX and need help understanding the error messages printed to the console.
The code is using dfs_lookup() to retrieve the struct stat of a file. Note the implementation cannot use dfs_stat() as it requires valid values for fields such as st_ino that dfs_stat() does not provide. The code in question is:
int
d_lstat(const char * restrict path, struct stat * restrict sb)
{
    int rc;
    dfs_obj_t *obj = NULL;

    rc = dfs_lookup(dfs, path, O_RDONLY, &obj, NULL, sb);
    ...
If the file path exists (e.g. "/"), this works. But if the path doesn't exist (e.g. "//.Trash"), the call to dfs_lookup() does not return. Instead, the console endlessly prints messages like:
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] external ERR # [6937851.329315] mercury->msg: [error] /builddir/build/BUILD/mercury-2.1.0rc4/src/na/na_ofi.c:2972 # na_ofi_msg_send(): fi_tsend() failed, rc: -13 (Permission denied)
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] external ERR # [6937851.329374] mercury->hg: [error] /builddir/build/BUILD/mercury-2.1.0rc4/src/mercury_core.c:2727 # hg_core_forward_na(): Could not post send for input buffer (NA_ACCESS)
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] hg ERR src/cart/crt_hg.c:1104 crt_hg_req_send_cb(0x1d0cd40) [opc=0x4070001 (DAOS) rpcid=0x63f8133700000008 rank:tag=1:2] RPC failed; rc: DER_HG(-1020): 'Transport layer mercury error'
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] object ERR src/object/cli_shard.c:889 dc_rw_cb() RPC 1 failed, DER_HG(-1020): 'Transport layer mercury error'
Am I mis-using dfs_lookup() or using it incorrectly?
--chuck
Re: What will happen to DAOS if all SCM space is consumed?
Lombardi, Johann
Hi there,
Applications will get a DER_NOSPACE error. We currently don’t support serialization of metadata to SSDs.
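As an illustration only (not from this thread), this is roughly how that condition surfaces to a DFS application: the errno-style return from dfs_write() becomes ENOSPC, which corresponds to DER_NOSPACE at the DAOS layer; the dfs and obj handles are assumed to be valid.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <daos.h>
#include <daos_fs.h>

/* Write one buffer through DFS and report space exhaustion explicitly.
 * Once SCM is full, the write fails with ENOSPC at this layer (DER_NOSPACE
 * inside DAOS), even if the NVMe SSDs still have free capacity. */
static int
write_blob(dfs_t *dfs, dfs_obj_t *obj, void *buf, daos_size_t len, daos_off_t off)
{
    d_sg_list_t sgl;
    d_iov_t     iov;
    int         rc;

    d_iov_set(&iov, buf, len);
    sgl.sg_nr     = 1;
    sgl.sg_nr_out = 0;
    sgl.sg_iovs   = &iov;

    rc = dfs_write(dfs, obj, &sgl, off, NULL);
    if (rc == ENOSPC)
        fprintf(stderr, "write failed: %s (SCM/metadata space exhausted)\n",
                strerror(rc));
    return rc;
}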
Cheers, Johann
From: <daos@daos.groups.io> on behalf of "bob@..." <bob@...>
Hi all
Re: the fault domain setting of the daos container
Lombardi, Johann
Hi there,
There is a gap in the support of rf_lvl, which is not used by the placement algorithm yet. Please see https://daosio.atlassian.net/browse/DAOS-10215 to track progress on this.
Cheers, Johann
From: <daos@daos.groups.io> on behalf of "dagouxiong2015@..." <dagouxiong2015@...>
Hello everyone: Recently I have been studying the fault domain setting of the DAOS container, hoping to deploy multiple engines on a single physical node while distributing object data across different physical nodes, so that the failure of a single physical node does not cause data loss. But when I configure:
[root@server-1 ~]# daos cont create pool --label cont12 --type POSIX --properties rf:1 --properties rf_lvl:0
ERROR: daos: "rf_lvl" is not a settable property (valid: cksum,cksum_size,compression,dedup,dedup_threshold,ec_cell,encryption,label,rf,srv_cksum,status)
Code analysis shows that only one fault-domain (by rank) layout is currently supported:
/**
 * Level of fault-domain to use for object allocation
 * rank is hardcoded to 1, [2-254] are defined by the admin
 */
enum {
    DAOS_PROP_CO_REDUN_MIN = 1,
    DAOS_PROP_CO_REDUN_RANK = 1, /** hard-coded */
    DAOS_PROP_CO_REDUN_MAX = 254,
};
In the current situation, if you want data redundancy across different physical nodes, do you have any good suggestions? Does DAOS plan to support configurable fault domains in the future?
Best regards!
dfs_lookup behavior for non-existent files?
Tuffli, Chuck
I'm porting an existing application to use DFS (DAOS v2.0.2) instead of POSIX and need help understanding the error messages printed to the console.
The code is using dfs_lookup() to retrieve the struct stat of a file. Note the implementation cannot use dfs_stat() as it requires valid values for fields such as st_ino that dfs_stat() does not provide. The code in question is:
int
d_lstat(const char * restrict path, struct stat * restrict sb)
{
    int rc;
    dfs_obj_t *obj = NULL;

    rc = dfs_lookup(dfs, path, O_RDONLY, &obj, NULL, sb);
    ...
If the file path exists (e.g. "/"), this works. But if the path doesn't exist (e.g. "//.Trash"), the call to dfs_lookup() does not return. Instead, the console endlessly prints messages like:
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] external ERR # [6937851.329315] mercury->msg: [error] /builddir/build/BUILD/mercury-2.1.0rc4/src/na/na_ofi.c:2972
# na_ofi_msg_send(): fi_tsend() failed, rc: -13 (Permission denied)
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] external ERR # [6937851.329374] mercury->hg: [error] /builddir/build/BUILD/mercury-2.1.0rc4/src/mercury_core.c:2727
# hg_core_forward_na(): Could not post send for input buffer (NA_ACCESS)
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] hg ERR src/cart/crt_hg.c:1104 crt_hg_req_send_cb(0x1d0cd40) [opc=0x4070001 (DAOS) rpcid=0x63f8133700000008 rank:tag=1:2] RPC failed; rc: DER_HG(-1020): 'Transport layer mercury error'
04/04-16:28:50.90 xxxxx DAOS[1178648/1178648/0] object ERR src/object/cli_shard.c:889 dc_rw_cb() RPC 1 failed, DER_HG(-1020): 'Transport layer mercury error'
Am I mis-using dfs_lookup() or using it incorrectly?
--chuck
What will happen to DAOS if all SCM space is consumed?
bob@...
Hi all
What will happen to DAOS when the metadata and small I/Os have consumed all the SCM capacity while the SSDs still have enough space to hold data? What is the strategy? Does it move some 4K values or metadata (keys) to the SSDs and then reclaim their space for incoming data, or does it simply stop accepting I/O requests?
Regards
the fault domain setting of the daos container
dagouxiong2015@...
Hello everyone:
Recently I have been studying the fault domain setting of the DAOS container, hoping to deploy multiple engines on a single physical node
while distributing object data across different physical nodes, so that the failure of a single physical node does not cause data loss.
But when I configure:
[root@server-1 ~]# daos cont create pool --label cont12 --type POSIX --properties rf:1 --properties rf_lvl:0
ERROR: daos: "rf_lvl" is not a settable property (valid: cksum,cksum_size,compression,dedup,dedup_threshold,ec_cell,encryption,label,rf,srv_cksum,status)
Code analysis shows that only one fault-domain (by rank) layout is currently supported:
/**
* Level of fault-domain to use for object allocation
* rank is hardcoded to 1, [2-254] are defined by the admin
*/
enum {
    DAOS_PROP_CO_REDUN_MIN = 1,
    DAOS_PROP_CO_REDUN_RANK = 1, /** hard-coded */
    DAOS_PROP_CO_REDUN_MAX = 254,
};
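For reference, here is a small untested sketch of the C-API counterpart of the CLI's "--properties rf:1" (property names as in daos_prop.h; treat the exact usage as my assumption, not documented behaviour). As with the CLI, only the redundancy factor is set; rf_lvl has no settable counterpart.

#include <daos.h>

/* Build the property list equivalent to "--properties rf:1".  The returned
 * daos_prop_t would then be passed to the container-create call and freed
 * with daos_prop_free() afterwards. */
static daos_prop_t *
redun_fac_prop(void)
{
    daos_prop_t *prop = daos_prop_alloc(1);

    if (prop == NULL)
        return NULL;

    prop->dpp_entries[0].dpe_type = DAOS_PROP_CO_REDUN_FAC;
    prop->dpp_entries[0].dpe_val  = DAOS_PROP_CO_REDUN_RF1;
    return prop;
}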
In the current situation, if you want data redundancy across different physical nodes, do you have any good suggestions?
Does DAOS plan to support configurable fault domains in the future?
Best regards!
Announcement: DAOS 2.0.2 is generally available
Prantis, Kelsey
All,
We are pleased to announce that the DAOS 2.0.2 release is now generally available. Notable changes in this maintenance release include the following updates on top of DAOS 2.0.1:
There are a number of resources available for the release:
As always, feel free to use this mailing list for any issues you may find with the release, or our JIRA bug tracking system, available at https://daosio.atlassian.net/jira, or on our Slack channel, available at https://daos-stack.slack.com.
Regards,
Kelsey Prantis
Senior Software Engineering Manager
Super Compute Storage Architecture and Development Division
Intel
Re: High latency in metadata write
shadow_vector@...
Hi Liang:
Is there any result from the array write test? Is there something wrong with my test?
Best Regards!
Re: Jenkins test
Murrell, Brian
On Wed, 2022-03-02 at 17:29 -0800, dongfeier wrote:
> Scripts not permitted to use staticMethod

Ultimately this means that some code in an untrusted shared library is trying to access a non-whitelisted groovy function.

> Administrators can decide whether to approve or reject this

You *could* do the above with the security implications it involves, but the correct solution is to use whitelisted methods.

> Error when executing unsuccessful post condition:

This is the method that is not whitelisted.

> at

And this is where it's being called from. It's here: https://github.com/daos-stack/pipeline-lib/blob/03a6dd8f16808094e2ba2971e839707cd690c0a5/vars/notifyBrokenBranch.groovy#L37

It's the use of env[] that is the problem. One solution here is to move that function to the trusted library at: https://github.com/daos-stack/trusted-pipeline-lib

But it seems a more correct solution is to replace the env[NAME] accesses with env."NAME", such as this (completely untested) PR does: https://github.com/daos-stack/pipeline-lib/pull/291

Cheers, b.
Re: Jenkins test
Pittman, Ashley M
I suspect that this is because you haven't configured Jenkins to use a build user and it's building your code as root. Our dockerfiles use the uid of the caller to own the files so that Jenkins can copy files in/out of the container, and we didn't think about the case where docker would be run as root. In theory we could probably work around this in the dockerfiles, but I recommend you look at your Jenkins config first.
Ashley.
From: daos@daos.groups.io <daos@daos.groups.io> on behalf of dongfeier <15735154041@...>
Hello,
useradd: UID 0 is not unique
The command '/bin/sh -c useradd --no-log-init --uid $UID --user-group --create-home --shell /bin/bash --home /home/daos daos_server' returned a non-zero code: 4
Jenkins not configured to notify users of failed builds.
Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getAt java.lang.Object java.lang.String. Administrators can decide whether to approve or reject this signature. Error when executing unsuccessful post condition: org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getAt java.lang.Object java.lang.String at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectStaticMethod(StaticWhitelist.java:279) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetArray(SandboxInterceptor.java:476) at org.kohsuke.groovy.sandbox.impl.Checker$11.call(Checker.java:484) at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetArray(Checker.java:489) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getArray(SandboxInvoker.java:45) at com.cloudbees.groovy.cps.impl.ArrayAccessBlock.rawGet(ArrayAccessBlock.java:21) at notifyBrokenBranch.call(notifyBrokenBranch.groovy:37) at WorkflowScript.run(WorkflowScript:1049) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.delegateAndExecute(ModelInterpreter.groovy:137) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.runPostConditions(ModelInterpreter.groovy:761) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.catchRequiredContextForNode(ModelInterpreter.groovy:395) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.catchRequiredContextForNode(ModelInterpreter.groovy:393) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.runPostConditions(ModelInterpreter.groovy:760) at com.cloudbees.groovy.cps.CpsDefaultGroovyMethods.each(CpsDefaultGroovyMethods:2030) at com.cloudbees.groovy.cps.CpsDefaultGroovyMethods.each(CpsDefaultGroovyMethods:2015) at com.cloudbees.groovy.cps.CpsDefaultGroovyMethods.each(CpsDefaultGroovyMethods:2056) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.runPostConditions(ModelInterpreter.groovy:750) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.runPostConditions(ModelInterpreter.groovy) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.executePostBuild(ModelInterpreter.groovy:728) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74) at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30) at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.fixName(PropertyishBlock.java:66) at sun.reflect.GeneratedMethodAccessor430.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21) at com.cloudbees.groovy.cps.Next.step(Next.java:83) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163) at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129) at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18) at 
org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51) at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:402) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:96) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:314) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:278) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) I have found the cause of the problem because the uid is repeated when adding users. Thank you very much |
Re: Storage Usage
d.korekovcev@...
If I manually remove hosts from conf
Re: Storage Usage
This should probably return a range in this case, i.e. n04p00[1-9]. I will create a ticket.
Regards, Tom
From: daos@daos.groups.io <daos@daos.groups.io> On Behalf Of d.korekovcev@...
Sent: Thursday, March 3, 2022 12:13 PM
To: daos@daos.groups.io
Subject: [daos] Storage Usage
Hi all!
dmg storage query usage returns the error "argument list too long"
Storage Usage
d.korekovcev@...
Hi all!
dmg storage query usage returns the error "argument list too long"
Re: Jenkins test
Hello,
useradd: UID 0 is not unique
The command '/bin/sh -c useradd --no-log-init --uid $UID --user-group --create-home --shell /bin/bash --home /home/daos daos_server' returned a non-zero code: 4
Jenkins not configured to notify users of failed builds. Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getAt java.lang.Object java.lang.String. Administrators can decide whether to approve or reject this signature. Error when executing unsuccessful post condition: org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getAt java.lang.Object java.lang.String at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectStaticMethod(StaticWhitelist.java:279) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetArray(SandboxInterceptor.java:476) at org.kohsuke.groovy.sandbox.impl.Checker$11.call(Checker.java:484) at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetArray(Checker.java:489) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getArray(SandboxInvoker.java:45) at com.cloudbees.groovy.cps.impl.ArrayAccessBlock.rawGet(ArrayAccessBlock.java:21) at notifyBrokenBranch.call(notifyBrokenBranch.groovy:37) at WorkflowScript.run(WorkflowScript:1049) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.delegateAndExecute(ModelInterpreter.groovy:137) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.runPostConditions(ModelInterpreter.groovy:761) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.catchRequiredContextForNode(ModelInterpreter.groovy:395) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.catchRequiredContextForNode(ModelInterpreter.groovy:393) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.runPostConditions(ModelInterpreter.groovy:760) at com.cloudbees.groovy.cps.CpsDefaultGroovyMethods.each(CpsDefaultGroovyMethods:2030) at com.cloudbees.groovy.cps.CpsDefaultGroovyMethods.each(CpsDefaultGroovyMethods:2015) at com.cloudbees.groovy.cps.CpsDefaultGroovyMethods.each(CpsDefaultGroovyMethods:2056) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.runPostConditions(ModelInterpreter.groovy:750) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.runPostConditions(ModelInterpreter.groovy) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.executePostBuild(ModelInterpreter.groovy:728) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74) at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30) at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.fixName(PropertyishBlock.java:66) at sun.reflect.GeneratedMethodAccessor430.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21) at com.cloudbees.groovy.cps.Next.step(Next.java:83) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163) at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129) at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163) at 
org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51) at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:402) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:96) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:314) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:278) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) I have found the cause of the problem because the uid is repeated when adding users. Thank you very much |
DAOS Community Update / Mar'22
Lombardi, Johann
Hi there,
Please find below the DAOS community newsletter for March 2022.
Past Events (February)
Upcoming Events
Release
R&D
News
See https://events.linuxfoundation.org/sodacode/ for more information.