Introduction
Since 2018, I’ve been in charge of CA next bank’s online mortgage platform, a project that remains, to this day, one of the most exciting I’ve ever had the chance to work on.
Originally, the platform had been developed under Symfony 3.4 LTS, a version whose end-of-life date was the end of 2021. In other words, it was high time to upgrade to the latest LTS version. However, in 2023, the bank decided not to do things by halves: in addition to migrating to the new version of Symfony, it opted for a change of hosting provider and some welcome adjustments to the platform. Suffice it to say that this migration, already technical enough, turned out to be much more complex (and full of surprises) than expected! In fact, it was this challenge that inspired me to write an article on the subject.
For obvious reasons of confidentiality, I’ll refrain from revealing the names of the two hosting providers - which we’ll simply call “old hosting provider” and “new hosting provider” - as well as any information that’s a little too sensitive to share.
Start
As the bank had already migrated the rest of its infrastructure to the new hosting provider, it was only logical that it should want to do the same with the mortgage platform.
On our side, we also had a few grievances with the old hosting provider. All the dependencies (PHP, MariaDB, Apache, etc.) had been installed only once, at the start of the project, by the hosting provider itself… and never updated since. This represented an explosive cocktail of security flaws, uncorrected bugs and performance problems.
Fortunately, internally, we had taken the initiative and migrated the project to Docker. This not only enabled us to update dependency versions with ease, but also to deploy the platform in a few clicks on the workstation of any developer joining the team, and on our internal server. The idea of deploying the platform at the new hosting provider as a Docker project seemed obvious to us: a simplified deployment process, reduced maintenance costs for the bank, and consistency with our other projects.
After a series of meetings with the bank and the new hosting provider, we got the go-ahead for a Docker Compose deployment. So everything seemed to be on track 😁.
Perfect! Between this change of hosting provider, the migration to Symfony, and the addition of new features, all we had to do was fine-tune our roadmap.
Priority number one, of course, was to change hosting provider while also migrating data from existing records. Just imagine: it already takes users several days to complete their mortgage application… So if we lost everything during the migration, it would be a disaster for both customers and the bank. But here’s where things get complicated: with the level of banking security implemented on the platform, it’s impossible to export existing data! In fact, the platform has been designed in such a way that even a malicious system administrator, with access to the disks and database, cannot extract or resell anything.
The second priority? Migrate all code to the latest LTS version of Symfony. This meant revising almost 100,000 lines of code, switching to a new version of PHP, migrating Symfony, reworking the back-office, and replacing the front-end. All in all, a lot of fun to come.
Finally, in third place came the addition of new features to the platform, which would have to wait until the first two stages were complete.
Data security
As mentioned above, it’s simply impossible to extract anything from the platform. Data is encrypted before it even enters the database, including images and PDFs uploaded by users. Everything is end-to-end encrypted, and the only way to access this information in clear text is via the online mortgage platform interface, with the appropriate roles and permissions, of course.
The encryption system is both simple and highly effective. It is based on several elements:
- The data to be encrypted (logical, no?)
- A salt, which changes on every save
- A pepper, which remains constant and lives in the platform’s configuration files
- A key derived from the hardware that hosts the platform
This cocktail ensures that even with access to the disks or database, the information remains inaccessible without going through the platform itself.
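To make the recipe concrete, here is a minimal sketch of how these four ingredients can be combined, using PHP’s built-in sodium extension. It illustrates the principle only - it is not the platform’s actual implementation, and every name in it is hypothetical.

```php
<?php
// Minimal sketch, not the platform's real implementation.
// A per-row salt (regenerated on every save), a constant pepper from the
// configuration and a machine-derived key are combined into one secret key.

function buildKey(string $pepper, string $hardwareKey, string $salt): string
{
    // Derive a 32-byte key; the salt makes it unique per row and per save.
    return sodium_crypto_generichash(
        $pepper . $hardwareKey,
        $salt,
        SODIUM_CRYPTO_SECRETBOX_KEYBYTES
    );
}

function encryptValue(string $plaintext, string $pepper, string $hardwareKey): string
{
    $salt  = random_bytes(16); // new salt on every save
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $key   = buildKey($pepper, $hardwareKey, $salt);

    // The salt and nonce are not secret: store them alongside the ciphertext.
    return base64_encode($salt . $nonce . sodium_crypto_secretbox($plaintext, $nonce, $key));
}
```

Because the salt is regenerated on every save, encrypting the same value twice never produces the same ciphertext - which is exactly the property detailed below.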
The data
This encryption system is applied to all data, whether it’s the customer’s first name, the number of children they have, or even the PDF scan of their salary certificate. In short, everything that comes in is automatically encrypted for maximum security.
The salt
The salt is unique to each table and each row, and changes with every insertion or update. This means that even if three customers have exactly the same name, the encrypted values stored in the database will all be different.
The pepper
Unlike the salt, the pepper is a constant ingredient. It remains identical for all fields and all files. Its role is to reinforce the salt by adding a second key, one that is not stored in the database. This considerably complicates brute-force attempts, making the task even harder for attackers.
The hardware key
This is where things get interesting. The hardware key is a subtle but essential element. It ensures that even if a system administrator decides to copy the entire database, the uploaded files and all the platform code (yes, imagine an XXL copy-paste), they still won’t be able to access the encrypted data. Why not? Because the key is derived from the machine on which the platform is running, making that copy… perfectly unusable anywhere else.
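The article deliberately doesn’t say which machine properties feed into this key, so the sketch below is purely hypothetical; it only illustrates the idea of deriving a key from identifiers that change when the hardware does.

```php
<?php
// Hypothetical illustration - the real inputs are deliberately undisclosed.
// /etc/machine-id and the uname values stand in for "something that changes
// when the platform runs on a different machine".

function hardwareKey(): string
{
    $machineId = trim((string) file_get_contents('/etc/machine-id')); // stable per OS install
    $host      = php_uname('n') . '|' . php_uname('m');               // hostname + architecture

    // Hash the identifiers so the raw values are never used directly.
    return hash('sha256', $machineId . '|' . $host, true);
}
```

Copy the data to another server and this function returns a different key, so every decryption attempt simply fails.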
Data migration
Since extracting the data as-is is simply impossible, we had to implement an additional deployment on the old platform prior to its retirement. This deployment introduced a new CLI command whose mission is to retrieve all the data required for the migration, serialize it into a flat file, and apply light encryption with a predefined key. We also integrated the reverse command into the new platform’s code.
In this way, a bank system administrator can extract the data and transfer it to the new server, where it can then be imported without problem.
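Here is what the export side might look like as a Symfony console command - a bare sketch in Symfony 6 style, where the repository, the migration key wiring and the output path are all hypothetical names, not the actual command.

```php
<?php
// Hypothetical sketch of the export command. Records are read through the
// platform's own decryption layer, serialized, then sealed with a single
// pre-agreed migration key so the flat file can travel to the new server.

use Symfony\Component\Console\Attribute\AsCommand;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

interface RecordRepository
{
    /** @return array<int, array<string, mixed>> rows already decrypted by the platform */
    public function fetchAllDecrypted(): array;
}

#[AsCommand(name: 'app:migration:export', description: 'Serialize all records into an encrypted flat file')]
final class ExportDataCommand extends Command
{
    public function __construct(
        private readonly RecordRepository $records,
        private readonly string $migrationKey, // the predefined key shared with the new platform
    ) {
        parent::__construct();
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $payload = serialize($this->records->fetchAllDecrypted());

        // "Light" encryption: a single symmetric key, agreed on in advance.
        $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
        file_put_contents('export.bin', $nonce . sodium_crypto_secretbox($payload, $nonce, $this->migrationKey));

        $output->writeln('Export written to export.bin');

        return Command::SUCCESS;
    }
}
```

The import command on the new platform does the reverse: open the file with the same key, deserialize it, and push every record back through the new platform’s full encryption pipeline.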
Code migration
Although the Symfony framework is a champion in the art of simplifying life for developers, migrations included, this phase nevertheless proved to be quite a challenge. Indeed, going from Symfony 3 to Symfony 6 involved an astronomical number of changes - and I’m talking about many, many changes! Whether in terms of folder structure, internal APIs or methods for performing certain tasks, the transition was anything but straightforward. Just one example: the authentication system, which has now adopted the passport concept. This involved a complete overhaul of the platform’s entire authentication layer. Suffice it to say, it was a real headache, but also an excellent opportunity to modernize our approach!
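For those who haven’t met it yet, here is roughly what the passport concept looks like in Symfony 6 - a generic example, not the platform’s actual authenticator:

```php
<?php
// Generic Symfony 6 authenticator: what used to be spread across listeners
// and providers now lives in one class returning a Passport.

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Core\Exception\AuthenticationException;
use Symfony\Component\Security\Http\Authenticator\AbstractAuthenticator;
use Symfony\Component\Security\Http\Authenticator\Passport\Badge\UserBadge;
use Symfony\Component\Security\Http\Authenticator\Passport\Credentials\PasswordCredentials;
use Symfony\Component\Security\Http\Authenticator\Passport\Passport;

final class LoginAuthenticator extends AbstractAuthenticator
{
    public function supports(Request $request): ?bool
    {
        return $request->isMethod('POST') && $request->getPathInfo() === '/login';
    }

    public function authenticate(Request $request): Passport
    {
        // The whole authentication attempt is expressed as one Passport
        // holding the user badge and the credentials to check.
        return new Passport(
            new UserBadge((string) $request->request->get('email', '')),
            new PasswordCredentials((string) $request->request->get('password', ''))
        );
    }

    public function onAuthenticationSuccess(Request $request, TokenInterface $token, string $firewallName): ?Response
    {
        return null; // let the request continue to the secured page
    }

    public function onAuthenticationFailure(Request $request, AuthenticationException $exception): ?Response
    {
        return new Response('Authentication failed', Response::HTTP_UNAUTHORIZED);
    }
}
```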
Symfony strongly recommends a progressive approach to migration. The idea is to move from version a.4 to version b.0, then from b.0 to b.4, and so on, until you reach the desired version:
- 3.4 LTS
- 4.0
- 4.4 LTS
- 5.0
- 5.4 LTS
- 6.0
- 6.4 LTS
After analyzing the necessary changes and realizing that the separation between the application layer and the business logic was already well in place, we opted to rebuild on a brand-new Symfony 6.4 LTS project, rather than following the official step-by-step procedure.
It was risky, daring even, but it worked perfectly! This approach enabled us to obtain a site based on the latest version of Symfony, with a modern structure and features, rather than a hybrid solution retaining the old architecture (for those who remember bundles and the parameters.yml file 😅).
The most time-consuming phase was the migration of annotations to attributes across those 100,000 lines of PHP. Unfortunately, Rector wasn’t much help, due to the sometimes erroneous old comments and the switch to typed properties and parameters, which changed the signature of every function. Not to mention our in-house encryption/decryption system, which complicated things for Doctrine.
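As an illustration of the change, here is the same Doctrine mapping before and after - it looks mechanical, but multiplied across every entity and combined with the new property types, it touched nearly every signature:

```php
<?php
// Generic example of the annotations-to-attributes move (not a real entity
// from the platform).

use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity]
class Customer
{
    // Before, on Symfony 3 / PHP 7, this was a docblock annotation on an
    // untyped property:
    //
    //   /** @ORM\Column(type="string", length=255) */
    //   private $firstName;

    // After: a native PHP 8 attribute on a typed property. The added type is
    // what rippled through the getters, setters and everything calling them.
    #[ORM\Column(type: 'string', length: 255)]
    private ?string $firstName = null;

    public function getFirstName(): ?string
    {
        return $this->firstName;
    }

    public function setFirstName(?string $firstName): void
    {
        $this->firstName = $firstName;
    }
}
```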
Fortunately, we had over 2,000 unit, functional and non-regression tests, which enabled us to check that all the key elements worked without generating errors or warnings.
In the end, the code migration went off without a hitch, and we even had the opportunity to add a few extra tests along the way.
Deployment
The old environment, still in production, is ready to export its data. On our side, the code has been tested and is ready for deployment. All that remains to be done is to install all this on the new production server, run a few tests to check that everything is working correctly, then simply modify the DNS to redirect the domain name to the new server, while activating a maintenance page on the old server. In theory, nothing could be simpler!
Well, almost…
Docker yes, but not really
Remember when we got the go-ahead from our new hosting provider to use Docker Compose on their server? Well, it turns out that our contact didn’t really have the necessary technical skills! After more than a month of discussions, we were finally able to speak to someone technical on the hosting provider side, who explained that, due to:
- A partnership between the hosting provider and the server OS vendor, which pushes its own alternative tooling
- The hosting provider’s customized security overlay, which resets open ports and shuts down services at regular intervals

it was impossible to use Docker Compose on their server. Instead, we could use Podman.
I had never heard of Podman, but given the urgency of the situation, we had no choice but to adapt.
Podman
Podman, although an alternative to Docker, offered such limited support for Docker Compose that it barely covered 10% of our needs.
We therefore had to write an impressively long bash script to execute each low-level Podman command, in order to reproduce what a simple “docker compose up -d” does in our local environment. This included:
- Downloading all the images
- Creating the various networks
- Initializing named volumes
- Mounting bind volumes
- Attaching containers to networks
- Mapping ports between containers
- Asking the firewall to open the necessary ports
- Redirecting firewall traffic to the containers
- Limiting CPU and RAM consumption
- Limiting the size of the logs
- And much, much more…
This stage was pure agony. By contrast, migrating the code and data felt like a leisurely stroll in a lakeside park.
With enough stubbornness, we managed to come up with a magic script. This script takes care of putting the site into maintenance mode, updating the images, updating the database and then putting the site back online. Basically, it reproduces exactly what we had locally and on our internal servers, and what we had planned to deploy on the new hosting provider’s servers.
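To give an idea of the gap, here is a heavily condensed, purely illustrative excerpt of the kind of low-level commands the script chains together - the image names, ports and limits are placeholders, not our actual values:

```bash
#!/usr/bin/env bash
# Illustrative excerpt only - the real script is far longer and the values
# below are placeholders.
set -euo pipefail

podman network create app-net   # docker compose creates networks for you
podman volume create db-data    # named volume for the database

podman run -d --name database \
  --network app-net \
  -v db-data:/var/lib/mysql \
  --memory 2g --cpus 2 \
  --log-opt max-size=10mb \
  docker.io/library/mariadb:10.11

podman run -d --name web \
  --network app-net \
  -v "$PWD/app":/var/www/html \
  -p 8443:443 \
  --memory 1g --cpus 2 \
  docker.io/library/nginx:stable

# The security overlay closes ports on a schedule, so the firewall rules
# had to be reapplied explicitly as well.
firewall-cmd --add-port=8443/tcp
```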
The big day
The stability and data-migration tests have passed locally, on the pre-production server, and finally on the production server (temporarily loaded with pre-production data). We’re ready for action!
The bank exports all data from the previous hosting provider’s production server and activates the maintenance page. Customers no longer have access to the platform, the clock is ticking, and we’ve got to move fast!
The export is transferred to the new server, and now it’s up to us! We launch the data import command… Employee accounts, a multitude of mortgage applications, thousands of documents - if a single item fails to be imported, decrypted, and properly re-encrypted before being stored, everything will have to be rolled back!
Hurrah! Everything has been successfully imported!
Now let’s test the user journeys! Fortunately, our end-to-end tests can be run against environments other than our local configuration. I update the domain name in our tests and run the procedure from my machine. At the same time, I check the back-office dedicated to employees.
The tests run smoothly all the way through. I manually check the back-office, and my test application is there! Everything’s in order. I delete the test application and inform the customer that the migration is complete. Now all they have to do is switch the DNS to the new server!
Conclusion
In the end, despite an installation on the new hosting provider’s servers that took longer than expected (thanks Podman…), everything went very smoothly. The platform is now on the latest LTS version of Symfony, we’ve migrated to the latest versions of PHP and MariaDB, replaced Apache with Nginx, and we now have the flexibility to update these dependencies at any time.
The customer was able to transfer their mortgage platform to the new hosting provider, and we are now in the process of implementing the new features requested by their business team.
The project was not only interesting, but also a lot of fun. And even though Podman and I aren’t exactly the best of friends (far from it), it’s always rewarding to learn new methods and technologies. It also gave me the opportunity to dive deep into the inner workings of Docker and Podman.
A satisfied customer, a delighted SQLI team, and, for me, a wealth of new knowledge and fond memories of this project. What more could you ask for?