Two more pro tips for development orchestration, and then we’ll be on to running our application in production!
First, remember how I mentioned that if we ran mongo as it is, we’d lose all of our data?
Well, the fix is to use a volume.
A volume is like a directory that lives outside your containers, the same way a folder backed up to the cloud lives outside your laptop.
Say your “Music” folder was really important. You could create a “PersistentVolume” and sync your Music folder with it.
Same thing for a database. Most of them store their data in a certain directory, so you’ll just have to dig through some documentation and figure out where it is.
For mongo, that place is `/data/db`. I found it on the Docker Hub page for the project: https://hub.docker.com/_/mongo/
Declaring persistent volumes varies a bit per orchestrator, but the same principles apply.
For our development orchestrator, docker-compose, we can declare a volume by adding a new section to our config named `volumes`.
It’s possible to specify more options after the colon, but this will create a volume using the default settings. Those are fine for now.
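In compose, that declaration is just a top-level section; leaving nothing after the volume’s name picks the defaults:

```yaml
volumes:
  mongo:   # nothing after the colon = default settings
```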
Next, we just need to reference our new volume by name.
Here’s what our new config looks like:
```yaml
command: bash -c "npm i && npm run dev"
```
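For reference, here’s a minimal sketch of what the whole file could look like, based on the services described above (the `node` image tag is an assumption on my part; the service names `my-container` and `mymongodb` and the volume name `mongo` come from this series):

```yaml
version: '3'
services:
  my-container:
    image: node            # assumed base image for our app
    working_dir: /usr/src/service
    volumes:
      - .:/usr/src/service # sync our code into the container
    command: bash -c "npm i && npm run dev"
  mymongodb:
    image: mongo
    volumes:
      - mongo:/data/db     # persist mongo's data in the named volume
volumes:
  mongo:                   # default settings are fine for now
```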
Earlier we mapped the current directory `.` to `/usr/src/service` in `my-container`.
This allowed us to synchronize the code from our repository with a directory inside of the container. If we update our code, `/usr/src/service` will reflect that, and if `/usr/src/service` is modified by the container, it will be reflected in our current directory.
This time, in `mymongodb`, we synchronized `/data/db` inside the container with a persistent volume named `mongo`.
As mongodb persists its data, it’s saved to `/data/db`, which is synchronized with `mongo`.
Now, if we restart our container, we won’t lose any data!
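You can see this for yourself: tear everything down and bring it back up, and the data is still there. Named volumes stick around unless you explicitly ask to remove them:

```
# Stop and remove the containers; the named "mongo" volume survives
docker-compose down
docker-compose up -d

# Only this would delete the named volumes (and your data!):
# docker-compose down -v
```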
The magic is when you create your persistent volumes in the cloud.
Each cloud provider handles provisioning volumes with their specific technology. In AWS, your orchestrator will provision the appropriate AWS resources, such as EBS or EFS drives.
The provider implements the interfaces of the orchestrator.
The people with AWS certifications deal with those details. Or the people who make AWS, and they're pretty good at it too. ;)
This is super important, and powerful, so I want you to get this part:
This allows you to work with volumes in a uniform manner, and that configuration is easily moved between different clouds.
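As a concrete example of that uniformity: on Kubernetes, a common production orchestrator, the same idea is expressed as a PersistentVolumeClaim. The claim below looks identical whether the cluster runs on AWS, GCP, or your laptop; only the storage technology behind it changes (the name `mongo` and the 1Gi size are just illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo
spec:
  accessModes:
    - ReadWriteOnce     # mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi      # ask the provider for 1Gi of storage
```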
Bye bye, vendor lock-in!
And, you know, your data doesn’t get deleted! Always a plus.
There you have it. That’s the first of the two tips, and that’s all for today!
Tune in tomorrow for my next development orchestration tip!
Then… how’d you like it if I showed you how to orchestrate your containers in production?
See you soon.
Patrick “Pro tips” Scott