Hi, for the life of me I cannot find the CrashPlan log file location. Can someone please point me to it?
CrashPlan has basically hung at 51% backed up and I am trying to figure out why.
I’m using a 5N in case it matters.
Thanks.
Hi, I'm not 100% sure, but it might be in the crashplan/app folders?
How much are you trying to back up, by the way? I think there were some limitations on how much data can be backed up per session; some other users had to break their backup sets into smaller ones due to memory limits.
I think I just found it:
http://www.drobospace.com/forums/showthread.php?tid=141634&pid=189794#pid189794
Thanks Paul,
I'm pretty sure it's failing because I'm trying to back up a lot of files; however, I'd like to confirm via the log files.
The location for the log files is /tmp/DroboApps/crashplan/, but I'm still struggling to actually get to it. Am I right in assuming the only way to access this location is via the command line (SSH)? I've installed Sudo and OpenSSH.
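If SSH is indeed the way in, this is roughly what I had in mind for pulling the logs off the 5N. It's only a sketch using Python's paramiko library; the host address, credentials, and the exact log filename under /tmp/DroboApps/crashplan/ are all assumptions on my part:

```python
# Rough sketch: list and tail the CrashPlan logs on the 5N over SSH.
# Assumptions: paramiko is installed, the Drobo answers SSH on the default
# port, and the logs live under /tmp/DroboApps/crashplan/ (per this thread).
import paramiko

DROBO_HOST = "192.168.1.50"   # hypothetical address of the 5N
DROBO_USER = "root"           # whichever account OpenSSH was set up with
DROBO_PASS = "changeme"       # placeholder credentials
LOG_DIR = "/tmp/DroboApps/crashplan/"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(DROBO_HOST, username=DROBO_USER, password=DROBO_PASS)

# First see what files actually exist in the log directory...
_, stdout, _ = client.exec_command(f"ls -la {LOG_DIR}")
print(stdout.read().decode())

# ...then tail whichever one looks relevant (the filename here is a guess).
_, stdout, _ = client.exec_command(f"tail -n 100 {LOG_DIR}log/service.log.0")
print(stdout.read().decode())

client.close()
```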
While I haven't tried CrashPlan on a 5N, I can tell you from experience with both the desktop version and the versions for other NASes that if you are backing up lots of files, it needs quite a lot of RAM… which I don't think the 5N has.
Hi alanant, from Ricardo's linked post it seems to be, but it's probably best to wait for confirmation of the actual log file location just in case it's somewhere else.
While I've learnt to trust Docchris, it does make me wonder why CrashPlan would really need so much memory in general, and whether it could be optimised by the makers in some way…
… I say this because, aside from the main program itself, simply calculating a hash or backing up a file (and keeping track of the byte offset so a failed upload can be resumed, etc.) shouldn't really take much memory at all?
I know that some cloud backup tools rely on a lot of client-side processing or pre-processing, such as searching for duplicate blocks or files, or compressing files into a temporary area, but I'm sure they could tweak the code so that the compression method or the way files are processed uses a less memory-intensive approach.
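Just to illustrate what I mean: hashing a file and remembering where you got to for a resume can be done in a fixed-size buffer, so that part shouldn't scale with the amount of data. A minimal sketch of the idea in Python (the file path and chunk size are just examples, and this is obviously not how CrashPlan itself is written):

```python
# Minimal sketch: stream a file through a hash while tracking the byte
# offset, so a failed upload could later resume from `offset`.
# Memory use stays at roughly one chunk (64 KiB here) regardless of how
# big the file is -- this is just the idea, not CrashPlan's actual code.
import hashlib

def hash_with_resume_point(path, chunk_size=64 * 1024):
    h = hashlib.sha256()
    offset = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
            offset += len(chunk)  # last byte successfully processed
    return h.hexdigest(), offset

# Example with a hypothetical share path:
# digest, resume_at = hash_with_resume_point("/mnt/DroboFS/Shares/Photos/big.iso")
```

Where memory presumably does balloon is if the client keeps an index of every block or file it has seen in RAM for deduplication, which would explain the per-file overhead people are running into.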