Career Profile
I am a highly competent and enthusiastic person from Sweden. I love new challenges and learning new things. I value failing quickly, learning from it even faster, and making the best possible solution happen. When I am not working I like to spend time with family and friends, hack on a new project, travel the world, or try to find the answer to the meaning of life.
Experiences
In this role I have been instrumental in driving technical decision-making and raising the productivity of the engineering team. I led a major overhaul of the data pipeline, resulting in a 400-fold increase in speed and improved horizontal scalability, and I worked closely with the CTO to identify the company's top priorities. Through one-on-one mentoring sessions, lightning talks on system design, and hands-on collaboration, I helped colleagues adopt best engineering practices. I also rewrote our machine learning service to support faster iteration by our machine learning specialists. In addition, I built a brand-new data pipeline from scratch that enabled significantly higher feature velocity for the product; once it was deployed for production data, the CTO directed the entire engineering team to focus on implementing new features for the revamped system. Finally, I proposed and implemented a product management process that streamlined feature prioritization, product strategy, and feature design, ultimately enhancing our product's performance and competitiveness.
- Mentored colleagues through one-on-one sessions, system design talks, and hands-on collaboration to adopt best engineering practices
- Led a major overhaul of the data pipeline, resulting in a 400-fold speed increase and improved horizontal scalability
- Built a brand-new data pipeline from scratch that enabled significantly higher feature velocity
I was part of building the Couchbase cloud, joining the team before launch and contributing from the very beginning of the product. I wrote parts of the support service, refactored tens of thousands of lines of code, and worked on a Slack integration. I also improved the workflow for setting up our local development environment, which all our developers have benefited from. Beyond that, I added features, troubleshot issues, and fixed bugs across our code base.
I designed Calipsa’s systems to handle tens of thousands of alarms a minute without ever dropping a single alarm. I created Calipsa’s CI/CD pipeline and its testing and production environments, and helped developers troubleshoot their code and fix issues. I cut the cost of the Kubernetes cluster by 25% while also making the platform much more stable. I also wrote several small programs in Go, for example a monitoring system that alerted our on-call team when anomalies occurred.
- Created their CI/CD pipeline
- Cut Kubernetes cluster costs by 25%
- Made the platform significantly more stable
At TimeEdit I was responsible for designing the infrastructure. I moved TimeEdit from bare-metal servers with manual installations to a fully Puppet-managed environment on Google Cloud Platform (GCP). I rewrote some of the backend systems from a Python solution that did not scale into a fully scalable, highly available solution written in Go, running inside Kubernetes. I also started redesigning their core server product to make it scalable and highly available.
- Puppetized the full environment
- Migrated the full service from on-premise bare metal to a cloud-native application
I was hired as a specialist to travel around Europe and help other teams with their on-site deployments. The work ranged from troubleshooting why one machine took twice as long to finish the same task as another machine that in theory should be identical, to writing code and Puppet modules. I also advised on best practices and performance-tuned systems on a regular basis.
My main responsibility was storage and backup: I was in charge of the backup solutions at Spotify, covering everything from capacity planning and buying the right hardware for the situation to helping our teams set up backups and automated restore tests. I designed and developed the backup API that all of Spotify now uses. I was also in charge of Spotify’s largest storage clusters, where all the new music from the record labels was uploaded; that work likewise involved capacity planning and buying hardware, as well as helping teams that interact with our storage clusters build scale-out solutions and troubleshooting issues as they came up. Beyond storage and backup, I designed and developed other services at Spotify, for example the service that checks whether any disk or PSU is broken in any of our servers; at the time around 10,000 servers were monitored by this service.
- Designed Spotify’s backup solution
- Part of the SRE core team responsible for keeping Spotify up and running
- Capacity planned backup systems and the original music storage
- Designed and wrote multiple high-availability, high-throughput systems
I was part of the Flyover team at Apple, which is in charge of Apple’s 3D maps. Only one other colleague and I were the system administrators with a deeper understanding of building scalable systems, so we helped the other teams with scalability issues. For our data-gathering team, which flies the airplanes and photographs the cities we then use to generate the 3D maps, I developed the verification process they still use today: a machine they can insert the disks into, containing all the images taken during the day, which tells them whether they need to re-fly some areas. Before this solution existed, they had to send the disks to us at the office, where we verified them with manual checks. I was also in charge of the computing cluster that generates all the 3D maps, a very data-intensive process because of how the data is structured.
The workload was much the same as at Apple, since it was the same product: Apple acquired C3, so the technology and routines did not change much. What I did here additionally was host all the 3D data so that other companies could buy and use the 3D data we generated.