Installing ScrumDo on your server.
Some information on feature parity with scrumdo.com: Open Source vs. Commercial
On Ubuntu? There's some additional info here: Installing ScrumDo on Ubuntu
Read and follow the developer environment set-up instructions: Set-up
Most of what you need to get a server up and running is covered there.
We run our live site straight from the production branch in Git; if you want to do the same:
git checkout production
If you want to keep your version of ScrumDo in sync with the one we run on scrumdo.com, you can upgrade with commands like these:
git pull
python manage.py syncdb
python manage.py evolve --hint --execute
(restart Apache)
We create a tag every time we deploy to scrumdo.com, in the format live-2010-12-23.
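If you'd rather pin your server to a known deploy than track the branch head, you can check out one of those tags. The sketch below demonstrates the workflow in a throwaway repository so the commands are self-contained; on a real clone, only the last two commands (after a `git fetch --tags`) are needed, and the tag name shown is just the example from above:

```shell
# Throwaway repo standing in for a real ScrumDo clone, so this is self-contained.
tmp=$(mktemp -d) && cd "$tmp"
git init -q scrumdo && cd scrumdo
git config user.email "you@example.com" && git config user.name "you"
echo scrumdo > README && git add README && git commit -qm "initial import"
git tag live-2010-12-23          # deploy tags follow the live-YYYY-MM-DD pattern

# On a real clone, this is all you need:
git tag -l 'live-*'              # list the available deploy tags
git checkout -q live-2010-12-23  # pin your checkout to that deploy
```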
Create a file called local_settings.py and put all your site-specific overrides to the settings in it. Here's what ours looks like:
GOOGLE_ANALYTICS = True
GOOGLE_ANALYTICS_ACCOUNT = 'OUR ANALYTICS KEY'
DEBUG = False
TEMPLATE_DEBUG = False
SECRET_KEY = 'OUR SECRET KEY'
EMAIL_HOST='OUR SMTP HOST'
EMAIL_HOST_USER='[email protected]'
EMAIL_HOST_PASSWORD='OUR SMTP PASSWORD'
EMAIL_PORT='587'
EMAIL_USE_TLS=True
CONTACT_EMAIL = "[email protected]"
DEFAULT_FROM_EMAIL = '[email protected]'
SERVER_EMAIL = '[email protected]'
SITE_NAME = "ScrumDo"
DATABASE_NAME = 'backlog'
DATABASE_USER = 'backlog'
DATABASE_PASSWORD = 'OUR DATABASE PASSWORD'
SCRUMDO_EXTRAS = ("extras.plugins.github_issues.GitHubIssuesExtra",)
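For the overrides to take effect, the project's settings.py has to pull them in; projects of this era (ScrumDo is Pinax-based) typically do it with a star import at the bottom of settings.py. A minimal sketch of that mechanism, assuming that standard pattern; the defaults below are illustrative, not ScrumDo's real values:

```python
# Sketch of the override mechanism: settings.py defines defaults, then
# star-imports local_settings so site-specific values win.
# (Assumption: ScrumDo uses this common Django/Pinax pattern.)

# --- defaults, as they would appear in settings.py ---
DEBUG = True
SITE_NAME = "ScrumDo (dev)"

# --- at the very bottom of settings.py ---
try:
    from local_settings import *  # local overrides replace the defaults above
except ImportError:
    pass  # no local_settings.py present: run with the defaults
```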
Then, set up cron jobs to execute the following scripts:
- burnup_chart.py - Calculates burn-up charts and velocity for all projects. Run this once a day.
- site_stats.py - Calculates site statistics that can be viewed at http://scrumdo.com/stats. Run this once a day.
- manage.py send_mail - Sends queued email. Run this often (we run it every other minute on scrumdo.com).
- manage.py retry_deferred - Retries failed email. Run this about once a day.
You can see the scripts we use to run these in the cron-scripts directory of the source. Our crontab looks something like this:
10 22 * * * scrumdo/cron-scripts/record_backlog.sh
11 22 * * * scrumdo/cron-scripts/site_stats.sh
*/2 * * * * scrumdo/cron-scripts/send_email.sh
1 1 * * * scrumdo/cron-scripts/resend_email.sh
*/5 * * * * scrumdo/cron-scripts/extras_sync.sh
2 1 * * * scrumdo/cron-scripts/extras_pull.sh
We use django-haystack plus Solr for searching and filtering. If you want the search/filter options to work, you'll need to install Solr manually.
There is some information on installing it on the django-haystack help page.
After installing Solr, generate the Solr XML schema file:
python manage.py build_solr_schema > PATH_TO_SOLR/solr/conf/schema.xml
Then, build your initial index:
python manage.py rebuild_index
Then you should be good to start searching. django-haystack is set up to auto-update the index as changes are made, so you shouldn't have to rebuild it regularly. Occasionally, if the search index schema changes (apps/projects/search_indexes.py), you'll have to re-run the schema generation and index rebuild steps.
Next, you'll want to set up a real web server in front of your Django installation. We use Apache and mod_wsgi on scrumdo.com - here's some setup information: http://docs.djangoproject.com/en/dev/howto/deployment/modwsgi/
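For reference, a minimal mod_wsgi virtual host might look like the following. This is a sketch, not our actual configuration: the domain, filesystem paths, and the name of the WSGI script are all placeholders you'll need to adjust to your install.

```apache
# Hypothetical Apache virtual host for a ScrumDo checkout at /opt/scrumdo.
# All paths and the ServerName are assumptions -- adjust to your install.
<VirtualHost *:80>
    ServerName scrumdo.example.com

    # Hand every request to Django via mod_wsgi.
    WSGIScriptAlias / /opt/scrumdo/deploy/scrumdo.wsgi

    # Serve static media directly from Apache, bypassing Django.
    Alias /site_media/ /opt/scrumdo/site_media/

    <Directory /opt/scrumdo>
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```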
Finally, you'll want to point your DNS at the site, and perhaps install an SSL certificate.