ssh-agent and crontab -- is there a good way to get these to meet? [ssh-keys]

Accepted answer
Score: 37

In addition...

keychain is what you need! Just install it and add the following code to your .bash_profile:

keychain ~/.ssh/id_dsa

Then use the code below in your script to load the ssh-agent environment variables:

. ~/.keychain/$HOSTNAME-sh

If your key has a passphrase, keychain will ask for it once (valid until you reboot the machine or kill the ssh-agent).

Note: keychain also generates code for csh and fish shells.
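Putting the two pieces together, a cron-driven script might look like this (a sketch; the guard and the example rsync command are assumptions, not part of the original answer):

```shell
#!/bin/sh
# the env file keychain writes is named after the machine's hostname
KEYCHAIN_ENV="$HOME/.keychain/$(hostname)-sh"
# load the agent environment keychain recorded at login, if present
if [ -r "$KEYCHAIN_ENV" ]; then
    . "$KEYCHAIN_ENV"
fi
# ssh-based commands now find the agent via SSH_AUTH_SOCK, e.g.:
# rsync -a /local/dir/ backup-host:/remote/dir/
```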

Copied answer from https://serverfault.com/questions/92683/execute-rsync-command-over-ssh-with-an-ssh-agent-via-crontab

Score: 23

When you run ssh-agent -s, it launches a background process that you'll need to kill later. So, the minimum is to change your hack to something like:

eval `ssh-agent -s` 
svn stuff
kill $SSH_AGENT_PID

However, I don't understand how this hack is working. Simply running an agent without also running ssh-add will not load any keys. Perhaps MacOS' ssh-agent is behaving differently than its manual page says it does.
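For comparison, a version of the hack that actually loads a key before doing any work might look like this (a sketch; ssh-add will prompt for a passphrase if the default key has one, so this variant only suits interactive runs or passphrase-less keys):

```shell
#!/bin/sh
# start a throwaway agent; eval imports SSH_AUTH_SOCK and SSH_AGENT_PID
eval "$(ssh-agent -s)" >/dev/null
# load the default keys; without this step the agent is empty
ssh-add 2>/dev/null || true
# ... ssh-using work goes here, e.g. `svn update` ...
# clean up the agent we started
kill "$SSH_AGENT_PID"
```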

Score: 10

I had a similar problem. My script (which relied upon SSH keys) worked when I ran it manually but failed when run from crontab.

Manually defining the appropriate key with

ssh -i /path/to/key

didn't work.

But eventually I found out that SSH_AUTH_SOCK was empty when crontab was running SSH. I wasn't exactly sure why, so I ran

env | grep SSH

copied the returned value, and added this definition to the head of my crontab:

SSH_AUTH_SOCK="/tmp/value-you-get-from-above-command"

I'm out of my depth as to what's happening here, but it fixed my problem. The crontab runs smoothly now.
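For reference, the resulting crontab might look like this (a sketch; the socket path is a placeholder you'd replace with the value returned by `env | grep SSH`, and the rsync job is an assumption):

```
# crontab fragment: make the agent socket visible to cron jobs
SSH_AUTH_SOCK="/tmp/ssh-XXXXXXXX/agent.12345"
# the job itself can now use ssh normally
0 * * * * rsync -a /local/dir/ remote-host:/backup/
```

Note that the socket path changes whenever the agent restarts, so this definition has to be refreshed after each reboot.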

Score: 7

One way to recover the PID and socket of a running ssh-agent would be:

SSH_AGENT_PID=`pgrep -U $USER ssh-agent`
for PID in $SSH_AGENT_PID; do
    # assumes the socket suffix is the agent's pid minus one
    # (the pid of the pre-fork parent that created the socket)
    let "FPID = $PID - 1"
    FILE=`find /tmp -path "*ssh*" -type s -iname "agent.$FPID"`
    export SSH_AGENT_PID="$PID"
    export SSH_AUTH_SOCK="$FILE"
done

This of course presumes that you have pgrep installed on the system and that there is only one ssh-agent running; in the case of multiple agents it will take the one pgrep finds last.

Score: 7

My solution - based on pra's - slightly improved to kill the process even on script failure:

eval `ssh-agent`
function cleanup {
    /bin/kill $SSH_AGENT_PID
}
trap cleanup EXIT
ssh-add
svn-stuff

Note that I must call ssh-add on my machine (Scientific Linux 6).

Score: 5

To set up automated processes without automated password/passphrase hacks, I use a separate IdentityFile that has no passphrase, and restrict the target machines' authorized_keys entries with a prefix like from="automated.machine.com" etc.

I created a public-private keyset for the sending machine without a passphrase:

ssh-keygen -f .ssh/id_localAuto

(Hit return when prompted for a passphrase.)

I set up a remoteAuto Host entry in .ssh/config:

Host remoteAuto
    HostName remote.machine.edu
    IdentityFile  ~/.ssh/id_localAuto

and set up remote.machine.edu:.ssh/authorized_keys with:

...
from="192.168.1.777" ssh-rsa ABCDEFGabcdefg....
...

Then ssh doesn't need the externally authenticated authorization provided by ssh-agent or keychain, so you can use commands like:

scp -p remoteAuto:watchdog ./watchdog_remote
rsync -Ca remoteAuto/stuff/* remote_mirror
svn svn+ssh://remoteAuto/path
svn update
... 
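With the passphrase-less key and the from= restriction in place, a crontab entry can call these commands directly (a hypothetical example; the schedule and paths are assumptions):

```
# nightly mirror over the restricted remoteAuto alias
30 2 * * * rsync -Ca remoteAuto:stuff/ /home/user/remote_mirror/
```

Because the key has no passphrase, no agent or interactive prompt is needed; the from= restriction limits the damage if the key file leaks.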
Score: 4

Assuming that you have already configured your SSH settings and that the script works fine from a terminal, using keychain is definitely the easiest way to ensure that the script works fine in crontab as well.

Since keychain is not included in most Unix/Linux derivatives, here is the step-by-step procedure.

1. Download the appropriate rpm package depending on your OS version from http://pkgs.repoforge.org/keychain/. Example for CentOS 6:

wget http://pkgs.repoforge.org/keychain/keychain-2.7.0-1.el6.rf.noarch.rpm

2. Install the package:

sudo rpm -Uvh keychain-2.7.0-1.el6.rf.noarch.rpm

3. Generate keychain files for your SSH key; they will be located in the ~/.keychain directory. Example for id_rsa:

keychain ~/.ssh/id_rsa

4. Add the following line to your script anywhere before the first command that uses SSH authentication:

source ~/.keychain/$HOSTNAME-sh

I personally tried to avoid using additional programs for this, but everything else I tried didn't work. This worked just fine.

Score: 4

Inspired by some of the other answers here (particularly vpk's), I came up with the following crontab entry, which doesn't require an external script:

PATH=/usr/bin:/bin:/usr/sbin:/sbin

* * * * *   SSH_AUTH_SOCK=$(lsof -a -p $(pgrep ssh-agent) -U -F n | sed -n 's/^n//p') ssh hostname remote-command-here
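Unpacked into a script for readability, the same lookup might read as follows (a sketch; `lsof -F n` prints each file name prefixed with the field character `n`, which the `sed` strips):

```shell
#!/bin/sh
# find this user's ssh-agent process, if any (take the first match)
AGENT_PID="$(pgrep -u "$(whoami)" ssh-agent | head -n 1)"
if [ -n "$AGENT_PID" ]; then
    # -a ANDs the filters: that pid (-p) and unix sockets (-U);
    # -F n prints machine-readable file names, each prefixed with 'n'
    SSH_AUTH_SOCK="$(lsof -a -p "$AGENT_PID" -U -F n | sed -n 's/^n//p' | head -n 1)"
    export SSH_AUTH_SOCK
fi
# ... run the ssh command here ...
```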
Score: 2

Here is a solution that will work if you can't use keychain and you can't start an ssh-agent from your script (for example, because your key is passphrase-protected).

Run this once:

nohup ssh-agent > ~/.ssh-agent-file &
. ~/.ssh-agent-file
ssh-add  # you'd enter your passphrase here

In the script you run from cron:

# start of script
. ${HOME}/.ssh-agent-file
# now your key is available

Of course this allows anyone who can read ~/.ssh-agent-file and the corresponding socket to use your ssh credentials, so use it with caution in any multi-user environment.
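One partial mitigation (a sketch, not from the original answer) is to create the env file with a restrictive umask so other local users can't read it:

```shell
#!/bin/sh
# files created from here on are readable by the owner only (mode 600)
umask 077
nohup ssh-agent > "$HOME/.ssh-agent-file" &
# the agent socket itself lives in a mode-700 directory under /tmp by default
```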

Score: 2

Your solution works, but it will spawn a new agent process every time, as already indicated by some other answers.

I faced similar issues and found this blog post useful, as well as the shell script by Wayne Walker mentioned in the blog on GitHub.

Good luck!

Score: 2

Not enough reputation to comment on @markshep's answer, so just wanted to add a simpler solution: lsof wasn't listing the socket for me without sudo, but find is enough:

* * * * * SSH_AUTH_SOCK="$(find /tmp/ -type s -path '/tmp/ssh-*/agent.*' -user $(whoami) 2>/dev/null)" ssh-command

The find command searches /tmp for sockets whose full path matches that of ssh-agent socket files and which are owned by the current user. stderr is redirected to /dev/null to ignore the many permission-denied errors find usually produces when it descends into directories it can't access.

The solution assumes only one socket will be found for that user.

The target and path match might need modification for other distributions, ssh versions, or configurations, but it should be straightforward.
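If that single-socket assumption fails and several sockets match, a hedged variant that simply takes the first match (a sketch) is:

```shell
#!/bin/sh
# pick one socket deterministically when several agents are running
SSH_AUTH_SOCK="$(find /tmp/ -type s -path '/tmp/ssh-*/agent.*' -user "$(whoami)" 2>/dev/null | head -n 1)"
export SSH_AUTH_SOCK
```

Which agent that is then depends on find's traversal order, so this only helps when any of the user's agents holds the needed key.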
