cf v231: Issue with new webdav blobstore job
Rich Wohlstadter
Hi,
We recently upgraded to cf v231 and switched over from nfs to the new webdav nginx service. We have one environment where the blobstore is very large. The monit startup script for the blobstore job includes a recursive chown of the blobstore disk (chown -R vcap:vcap $RUN_DIR $LOG_DIR $DATA), which, depending on the speed of our storage, can take long enough that monit times out and tries to start the job again. The first start does eventually finish, but because of the delay monit launches a second one, the logs start showing errors binding to port 80, and monit eventually gives up with "execution failed".

Does that recursive chown need to be there? I compared the blobstore job to the old debian nfs job, and the nfs job only did a chown on the top-level /var/vcap/store/shared directory. This is causing us problems in that environment whenever we need to update or restart that vm.
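For reference, the difference looks roughly like this. The guarded version at the end is only a sketch of a possible workaround on my side, not anything that actually exists in the release, and $DATA here just stands for the blobstore data directory from the webdav startup script:

    # webdav blobstore job today (from its monit startup script):
    chown -R vcap:vcap $RUN_DIR $LOG_DIR $DATA

    # roughly what the old nfs job did instead (top-level only, not recursive):
    chown vcap:vcap /var/vcap/store/shared

    # hypothetical workaround, not from the release: only do the expensive
    # recursive walk if ownership of the data directory actually looks wrong
    if [ "$(stat -c %U "$DATA")" != "vcap" ]; then
      chown -R vcap:vcap "$DATA"
    fi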
Rich