As written by Adam Wiggins (http://12factor.net/): The twelve-factor app is a methodology for building software-as-a-service apps that:
- Use declarative formats for setup automation, to minimize time and cost for new developers joining the project;
- Have a clean contract with the underlying operating system, offering maximum portability between execution environments;
- Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration;
- Minimize divergence between development and production, enabling continuous deployment for maximum agility;
- And can scale up without significant changes to tooling, architecture, or development practices.
The twelve-factor methodology can be applied to apps written in any programming language, and which use any combination of backing services (database, queue, memory cache, etc).
At AgilTec we use the twelve-factor methodology to build your systems and services.
One codebase tracked in revision control, many deploys
All your application code lives in one repository. A codebase is run by developers on their local machines, and deployed to any number of other environments, such as a set of testing machines, staging servers, and the live production servers.
Explicitly declare and isolate dependencies
Every environment your code runs in has dependencies: a database, an image-processing library, a command-line tool. We never let your application assume those things will be in place on a given machine. Instead, we bake those dependencies into your software system, using open source provisioning and configuration tools.
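One way to make dependencies explicit rather than assumed is to declare them with pinned versions and verify them when the app starts. A minimal sketch, using Python's standard `importlib.metadata`; the package names in the example manifest are hypothetical:

```python
# Sketch: declare dependencies explicitly and fail fast at startup if
# any are missing, instead of assuming they exist on the machine.
from importlib.metadata import version, PackageNotFoundError

# An illustrative manifest, as a requirements file would list it.
# None means "any installed version"; a string pins an exact version.
DECLARED = {
    "requests": "2.31.0",   # hypothetical pinned dependency
    "redis": None,          # hypothetical unpinned dependency
}

def check_dependencies(declared):
    """Return a list of problems: missing packages or version mismatches."""
    problems = []
    for name, pinned in declared.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: not installed")
            continue
        if pinned is not None and installed != pinned:
            problems.append(f"{name}: expected {pinned}, found {installed}")
    return problems
```

Calling `check_dependencies(DECLARED)` at startup and refusing to run on any problem turns a vague "it works on my machine" failure into an immediate, explicit error.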
Store config in the environment
Configuration is anything that may vary between different environments. Code is all the stuff that doesn’t.
Usernames and passwords for various servers and services also count as configuration, and should never be stored in the code. This is especially true because your code is in source control, which means anyone with access to the source will know all your service passwords, a serious security hole as your team grows.
All configuration data should be stored in a separate place from the code, and read in by the code at runtime. Usually this means when we deploy code to an environment, we copy the correct configuration files into the codebase at that time.
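In code, this separation looks like reading every environment-varying value from the environment at runtime. A minimal sketch; the variable names `DATABASE_URL` and `APP_DEBUG` are illustrative assumptions, not fixed standards:

```python
# Sketch: build the app's config from the environment at runtime,
# never from values hard-coded in the source.
import os

def load_config(env=os.environ):
    """Assemble config from the environment, failing fast when a
    required value is missing."""
    try:
        database_url = env["DATABASE_URL"]
    except KeyError:
        raise RuntimeError("DATABASE_URL must be set in the environment")
    return {
        "database_url": database_url,
        # Optional values can carry safe defaults.
        "debug": env.get("APP_DEBUG", "false").lower() == "true",
    }
```

Deploying to a new environment then means setting different variables (or copying in a different config file that sets them), with no change to the code.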
Treat backing services as attached resources
Your code will talk to many services, like a database, a cache, an email service, a queueing system, etc. These should all be referenced by a simple endpoint (URL) and maybe a username and password. They might be running on the same machine, or they might be on a different host, in a different datacenter, or managed by a cloud SaaS company. The point is, your code shouldn’t know the difference.
This allows great flexibility: if we replace a local instance of Redis with one served by Amazon ElastiCache, the code doesn't have to change.
This is another case where defining your dependencies cleanly keeps your system flexible and abstracts each part from the complexities of the others, a core tenet of good architecture.
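Treating a backing service as a URL-shaped resource handle can be sketched as below; `REDIS_URL` is an assumed variable name, and the parsing is illustrative rather than any particular client library's API:

```python
# Sketch: a backing service is just a URL in config. Swapping a local
# Redis for a managed one means changing the URL, not the code.
import os
from urllib.parse import urlparse

def redis_endpoint(env=os.environ):
    """Turn a REDIS_URL resource handle into connection parameters."""
    url = urlparse(env.get("REDIS_URL", "redis://localhost:6379/0"))
    return {
        "host": url.hostname,
        "port": url.port or 6379,
        "db": int(url.path.lstrip("/") or 0),
    }
```

Whether the URL points at localhost, another datacenter, or a cloud provider, the calling code sees the same three connection parameters.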
Strictly separate build and run stages
We use bullet-proof procedures. The build stage turns the codebase into a deployable package. The release stage sends that package to a server together with the nicely separated config files for that environment. The run stage then starts the code so the application is available on those servers.
Execute the app as one or more stateless processes
The state of your system is completely defined by your databases and shared storage, not by each individual running application instance.
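The consequence is that any two process instances given the same backing store are interchangeable. A minimal sketch, where a dict-backed class stands in for a real database or cache:

```python
# Sketch: the process keeps nothing between requests; anything worth
# keeping goes to a backing store shared by all instances.
class BackingStore:
    """Stand-in for an external store (database, Redis, etc.)."""
    def __init__(self):
        self._data = {}
    def get(self, key, default=None):
        return self._data.get(key, default)
    def set(self, key, value):
        self._data[key] = value

def handle_request(store, session_id):
    """A stateless handler: every piece of state lives in the store,
    so any instance can serve the next request for this session."""
    count = store.get(session_id, 0) + 1
    store.set(session_id, count)
    return count
```

Because `handle_request` holds no state of its own, instances can be added, killed, or replaced at any time without losing data.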
Export services via port binding
Your application interfaces with the world through a simple URL. In a local development environment, the developer visits a service URL like http://localhost:5000/ to access the service exported by their app. In deployment, a routing layer routes requests from a public-facing hostname to the port-bound web processes.
Note also that the port-binding approach means that one app can become the backing service for another app, by providing the URL to the backing app as a resource handle in the config for the consuming app.
Scale out via the process model
Processes are first-class citizens. Processes in the twelve-factor app take strong cues from the Unix process model for running service daemons. Using this model, the developer can architect the app to handle diverse workloads by assigning each type of work to a process type.
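A sketch of that process model, using Python's `multiprocessing`: each workload type gets its own process type, and scaling out means running more processes of a given type. The type names and worker functions here are illustrative stand-ins:

```python
# Sketch: map each kind of work to a process type, then scale each
# type independently by starting more processes of it.
import multiprocessing

def web():      # would handle HTTP requests in a real app
    return "web"

def worker():   # would handle background jobs in a real app
    return "worker"

PROCESS_TYPES = {"web": web, "worker": worker}

def scale(formation):
    """Start processes per type, e.g. {"web": 2, "worker": 1}."""
    procs = []
    for ptype, count in formation.items():
        for _ in range(count):
            procs.append(
                multiprocessing.Process(target=PROCESS_TYPES[ptype], name=ptype)
            )
    for p in procs:
        p.start()
    return procs
```

Handling a traffic spike then becomes a change to the formation, not to the code.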
Maximize robustness with fast startup and graceful shutdown
The twelve-factor app’s processes are disposable, meaning they can be started or stopped at a moment’s notice. This facilitates fast elastic scaling, rapid deployment of code or config changes, and robustness of production deploys.
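Graceful shutdown usually means trapping SIGTERM, finishing the current unit of work, and exiting cleanly. A minimal sketch with Python's `signal` module; the job-loop shape is illustrative:

```python
# Sketch: a disposable process traps SIGTERM, finishes its current
# job, then stops, so the platform can kill and restart it at will.
import signal

class GracefulWorker:
    def __init__(self):
        self.stopping = False
        signal.signal(signal.SIGTERM, self._request_stop)

    def _request_stop(self, signum, frame):
        self.stopping = True  # finish the current job, then exit the loop

    def run(self, jobs):
        """Process jobs until done or until a stop was requested."""
        done = []
        for job in jobs:
            if self.stopping:
                break
            done.append(job())
        return done
```

Because the worker never dies mid-job, a deploy or a scale-down event looks to the rest of the system like any other clean exit.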
Keep development, staging, and production as similar as possible
We use Vagrant to ensure that all environments run the same backing services, the same configuration management techniques, the same versions of software libraries, and so on.
Treat logs as event streams
Log files keep track of a variety of things, from the mundane (your app has started successfully) to the critical (users are receiving thousands of errors).
A twelve-factor app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout.
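With Python's standard `logging` module, that amounts to attaching a stdout stream handler and nothing else, as in this sketch:

```python
# Sketch: the app writes its event stream to stdout and leaves
# routing and storage to the execution environment.
import logging
import sys

def make_logger(name="app"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(sys.stdout)  # stdout, never a logfile
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    logger.handlers = [handler]
    logger.propagate = False
    return logger
```

The environment can then capture, route, and archive the stream however it likes, with no log-rotation logic in the app itself.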
Run admin/management tasks as one-off processes
We run one-off admin tasks in an environment identical to production. We never run updates directly against a database, and we never run them from a local terminal window.
Twelve-factor strongly favors languages which provide a REPL shell out of the box, and which make it easy to run one-off scripts. In a local deploy, we invoke one-off admin processes by a direct shell command inside the app's checkout directory. In a production deploy, we use ssh or another remote command execution mechanism provided by that deploy's execution environment to run such a process.
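A one-off admin process can be sketched as a short-lived subprocess launched with the same interpreter and the same environment config as the long-running app. The task snippets and variable names below are illustrative:

```python
# Sketch: run a one-off task as its own process, with the app's
# environment, then let it exit.
import os
import subprocess
import sys

def run_admin_task(script, extra_env=None):
    """Run a one-off Python script with the app's environment and
    return its output; raises if the task fails."""
    env = dict(os.environ)
    env.update(extra_env or {})
    result = subprocess.run(
        [sys.executable, "-c", script],
        env=env, capture_output=True, text=True, check=True,
    )
    return result.stdout

# Example: a hypothetical "migration" one-off that reads the same
# config the app does:
# run_admin_task("import os; print(os.environ.get('DATABASE_URL'))",
#                {"DATABASE_URL": "postgres://db/app"})
```

Because the task gets the same codebase and the same config as the running app, it behaves identically to the processes it sits alongside, which is the whole point of this factor.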