Re: DEA/Warden staging error

Mike Dalessio

Worth noting that the git repo also needs to allow anonymous access. If
it's a private repo, then the 'git clone' is going to fail.

Can you verify that you can download the buildpack from your repo without
authenticating?
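
For example, something like the following run from the DEA VM should
complete without prompting for credentials (the repo URL here is only a
placeholder for your actual buildpack repo):

    git clone https://your-git-server/your-buildpack.git /tmp/buildpack-clone-test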

On Tue, Sep 15, 2015 at 7:43 PM, CF Runtime <cfruntime@gmail.com> wrote:

It's not something we've ever seen before.

In theory, the warden container needs the git binary, which I think it
gets from the cflinuxfs2 stack; and internet access to wherever the git
repo lives.

If the warden container has both of those things, I can't think of any
reason why it wouldn't work.
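
If it helps narrow that down, one rough way to check both from the DEA host
is to run commands inside an existing warden container via wsh (the
container handle is a placeholder, and the exact wsh invocation may differ
slightly on your setup):

    cd /var/warden/containers/<handle>
    ./bin/wsh --socket run/wshd.sock which git
    ./bin/wsh --socket run/wshd.sock curl -sI https://github.com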

Joseph
OSS Release Integration Team

On Tue, Sep 15, 2015 at 2:06 PM, kyle havlovitz <kylehav@gmail.com> wrote:

I tried deploying by uploading a buildpack to the CC (I had to set up
nginx first; I didn't have it running/configured before) and that worked! So
that's awesome, but I'm not sure what the problem with using a remote
buildpack is. Even with nginx, I still get the exact same error as before
when pushing using a remote buildpack from git.
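
For reference, the failing push is of the form (the app name and repo URL
below are just placeholders):

    cf push myapp -b https://github.com/example/example-buildpack.git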

On Tue, Sep 15, 2015 at 6:57 AM, CF Runtime <cfruntime@gmail.com> wrote:

Looking at the logs, we can see it finishing downloading the app
package. The next step should be to download and run the buildpack. Since
you mention there is no output after this, I'm guessing it doesn't get that
far.

It might be having trouble downloading the buildpack from the remote git
URL. Could you try uploading the buildpack to Cloud Controller and then
having it use that buildpack to see if that makes a difference?


http://apidocs.cloudfoundry.org/217/buildpacks/creates_an_admin_buildpack.html

http://apidocs.cloudfoundry.org/217/buildpacks/upload_the_bits_for_an_admin_buildpack.html
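
If you have the cf CLI available, the equivalent can be done in two steps
(the buildpack name, zip path, and position below are just examples):

    cf create-buildpack test-buildpack ./test-buildpack.zip 1
    cf push myapp -b test-buildpack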

Joseph
OSS Release Integration Team

On Mon, Sep 14, 2015 at 5:37 PM, kyle havlovitz <kylehav@gmail.com>
wrote:

Here are the full dea_ng and warden debug logs:
https://gist.github.com/MrEnzyme/6dcc74174482ac62c1cf

Are there any other places I should look for logs?

On Mon, Sep 14, 2015 at 8:14 PM, CF Runtime <cfruntime@gmail.com>
wrote:

That's not an error we normally get. It's not clear if the
staging_info.yml error is the source of the problem or an artifact of it.
Having more logs would allow us to speculate more.

Joseph & Dan
OSS Release Integration Team

On Mon, Sep 14, 2015 at 2:24 PM, kyle havlovitz <kylehav@gmail.com>
wrote:

I have the Cloud Foundry components built, configured, and running on
one VM (not in BOSH), and when I push an app I'm getting a generic 'FAILED
StagingError' message after '-----> Downloaded app package (460K)'.

There's nothing in the logs for the DEA/warden that seems suspect
other than these two things:


{
  "timestamp": 1441985105.8883495,
  "message": "Exited with status 1 (35.120s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\", \"/var/warden/containers/18vf956il5v/jobs/8/cursors\", \"/var/warden/containers/18vf956il5v/jobs/8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}



{
  "timestamp": 1441985105.94083,
  "message": "Exited with status 23 (0.023s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\", \"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket /var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\", \"--links\", \"vcap@container:/tmp/staged/staging_info.yml\", \"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}


And I think the second error is just during cleanup, only failing
because the staging process didn't get far enough in to create the
'staging_info.yml'. The one about iomux-link exiting with status 1 is
pretty mysterious though and I have no idea what caused it. Does anyone
know why this might be happening?
