As I like to read, write, think, laugh, test, play and do sports... this is the place where I post things that move me. I write in Greek, German or English. Sometimes it is technical, other times it's not. If you don't mind the mix of content and languages… feel free to join me here.
If you are a runner like me and have set yourself the goal of running your first marathon, you might be in need of a training plan.
Sure, you may think: hey, I know myself, feet are feet, I've already run half marathons, what's the difference… a full marathon cannot be that much more difficult. Hmm, believe me, it is. It does not simply need a bit more training, it needs a great deal more, and most importantly… it needs you to know you are doing the right thing even when your body talks back.
So… first you have to learn to distinguish 'good pain' (the discomfort of leaving your comfort zone) from 'bad pain' (the kind caused by injury), then you need to push through the 'good pain' and follow a plan for increasing your training volume without overdoing it.
I've found various free marathon training plans from RUNNER's WORLD and the Sport Fitness Advisor. Check them out and pick the one that suits your current level of fitness.
So I reconsidered simply deleting the files and started thinking about how to back them up instead.
A quick internet search led me to a PHP function, recursive_copy_file, provided by Guilherme Serrano as a GitHub Gist. I modified my script to copy each file under a new name into a backup folder and to delete the original files afterwards. This way I can still recall the content of the files if I ever want to, without them remaining accessible in their original location. I don't care about their original names or creation dates.
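The idea of my modified version looks roughly like this. This is only a minimal sketch, not Serrano's original Gist; the function name backup_and_remove, the folder layout and the uniqid()-based naming scheme are placeholders of my own.

```php
<?php
// Sketch: walk a folder tree, copy every file into a backup folder under a
// generated name, then delete the original. Names here are placeholders.
function backup_and_remove(string $src, string $backupDir): void
{
    if (!is_dir($backupDir)) {
        mkdir($backupDir, 0755, true);
    }
    foreach (scandir($src) as $entry) {
        if ($entry === '.' || $entry === '..') {
            continue;
        }
        $path = $src . DIRECTORY_SEPARATOR . $entry;
        if ($path === $backupDir) {
            continue; // skip the backup folder itself if it lives inside the source tree
        }
        if (is_dir($path)) {
            // Recurse into subfolders, mirroring them in the backup folder.
            backup_and_remove($path, $backupDir . DIRECTORY_SEPARATOR . $entry);
        } else {
            // Copy under a new, anonymous name; the original name is not needed.
            $newName = uniqid('bak_', true);
            if (copy($path, $backupDir . DIRECTORY_SEPARATOR . $newName)) {
                unlink($path); // remove the original only after a successful copy
            }
        }
    }
}
```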
For my script to know which folder it should start from, I put it directly into the folder it is supposed to back up and used getcwd().
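As a rough sketch, continuing the helper above (the 'backup' folder name is an assumption of mine):

```php
<?php
// The script sits in the folder it should back up, so the current working
// directory is the starting point.
$startDir  = getcwd();                     // folder the script was placed in
$backupDir = $startDir . '/backup';        // where the renamed copies go
backup_and_remove($startDir, $backupDir);  // helper sketched above
```

If the cron environment happens to start the script from a different working directory, __DIR__ is a more robust way to get the script's own folder.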
However, since my script would delete all the files in the directory while leaving itself in place, it had to check the extension returned by pathinfo() and make sure it does nothing for "php" files, or alternatively only handle files of another specific extension.
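Something along these lines, where should_backup() is a helper name I made up and which would be called from the copy loop above:

```php
<?php
// Decide per file whether the backup job should touch it at all,
// so the script never removes its own .php code.
function should_backup(string $path): bool
{
    $ext = strtolower(pathinfo($path, PATHINFO_EXTENSION));
    // Blacklist approach: never touch PHP files (the script itself included).
    if ($ext === 'php') {
        return false;
    }
    // Or flip it into a whitelist, e.g. only handle one specific extension:
    // return $ext === 'log';
    return true;
}
```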
But still, even if the script renames the copies and deletes the original files, this isn't saving much webspace. To solve this, I used rename() instead of copy(), compressed the moved file into a .gz archive, and called unlink() on the renamed copy afterwards.
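Roughly like this, streaming the data through PHP's zlib functions; the archive_file name, the buffer size and the error handling are my own choices, not the exact code I run:

```php
<?php
// Sketch of the space-saving variant: move the file instead of copying it,
// gzip the moved file, then drop the uncompressed copy.
function archive_file(string $path, string $backupDir): void
{
    if (!is_dir($backupDir)) {
        mkdir($backupDir, 0755, true);
    }
    $moved = $backupDir . DIRECTORY_SEPARATOR . uniqid('bak_', true);
    if (!rename($path, $moved)) {          // rename() moves, so no duplicate is left behind
        return;
    }
    // Stream the moved file into a .gz archive to keep memory use low.
    $in  = fopen($moved, 'rb');
    $out = gzopen($moved . '.gz', 'wb9');  // level 9 = best compression
    if ($in === false || $out === false) {
        return;
    }
    while (!feof($in)) {
        gzwrite($out, fread($in, 8192));
    }
    fclose($in);
    gzclose($out);
    unlink($moved);                        // only the compressed .gz remains
}
```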
Finally, back in the control center of my webspace, I only needed to select this script as a new cronjob in the Cronjob manager and define the interval at which it should run automatically.
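If your hoster lets you enter the schedule in raw crontab syntax instead of a form, a nightly run might look like the line below; the PHP binary and the script path are placeholders, and the exact format depends on the hosting panel.

```
# Run the backup script every night at 03:00 (paths are placeholders)
0 3 * * * /usr/bin/php /path/to/webspace/backup.php
```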