Articles From Emily Freeman
Article / Updated 05-30-2024
The success of your DevOps initiative relies heavily on following the process, but it’s also important to use the right tools. Selecting a cloud service provider isn’t an easy choice, especially when DevOps is your driving motivation. GCP (Google Cloud Platform), AWS (Amazon Web Services), and Azure have more in common than they have differences. Often, your decision depends more on your DevOps team’s comfort level with a particular cloud provider, or on your current stack, than on the cloud provider itself.

After you’ve decided to move to the cloud, the next step is to choose a cloud provider that fits your DevOps needs. Here are some things to consider when evaluating cloud providers with DevOps principles in mind:

- Solid track record. The cloud you choose should have a history of responsible financial decisions and enough capital to operate and expand large datacenters over decades.
- Compliance and risk management. Formal structure and established compliance policies are vital to ensure that your data is safe and secure. Ideally, review audits before you sign contracts.
- Positive reputation. Customer trust is absolutely key. Do you trust that you can rely on this cloud provider to continue to grow and support your evolving DevOps needs?
- Service Level Agreements (SLAs). What level of service do you require? Typically, cloud providers offer various levels of uptime reliability based on cost. For example, 99.9 percent uptime will be significantly cheaper than 99.999 percent uptime.
- Metrics and monitoring. What types of application insights, monitoring, and telemetry does the vendor supply? Be sure that you can gain an appropriate level of insight into your systems in as close to real time as possible.

Finally, ensure that the cloud provider you choose has excellent technical capabilities that meet your specific DevOps needs. Generally, look for:

- Compute capabilities
- Storage solutions
- Deployment features
- Logging and monitoring
- Friendly user interfaces

You should also confirm the capability to implement a hybrid cloud solution in case you need to at some point, as well as to make HTTP calls to other APIs and services.

The three major cloud providers are Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS). You can also find smaller cloud providers and certainly a number of private cloud providers, but the bulk of what you need to know comes from comparing the public cloud providers.

Amazon Web Services (AWS)

As do the other major public cloud providers, AWS provides on-demand computing through a pay-as-you-go subscription. Users of AWS can subscribe to any number of services and computing resources. Amazon is the current market leader among cloud providers, holding the largest share of the cloud market. It offers a robust set of features and services in regions throughout the world. Two of the most well-known services are Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (Amazon S3). As with other cloud providers, services are accessed and infrastructure is provisioned through APIs.

Microsoft Azure

Before Microsoft launched this cloud provider as Microsoft Azure, it was called Windows Azure. Microsoft designed it to do just what the name implies — serve as a cloud provider for traditionally Windows IT organizations. But as the market became more competitive and Microsoft started to better understand the engineering landscape, Azure adapted, grew, and evolved.
Although still arguably less robust than AWS, Azure is a well-rounded cloud provider focused on user experience. Through various product launches and acquisitions — notably GitHub — Microsoft has invested heavily in Linux infrastructure, which has enabled it to provide more robust services to a wider audience.

Google Cloud Platform (GCP)

The Google Cloud Platform (GCP) has the least market share of the three major public cloud providers but offers a substantial set of cloud services throughout nearly two dozen geographic regions. Perhaps the most appealing aspect of GCP is that it offers users the same infrastructure Google uses internally. This infrastructure includes extremely powerful computing, storage, analytics, and machine learning services. Depending on your specific product, GCP may have specialized tools that are lacking (or less mature) in AWS and Azure.

Finding DevOps tools and services in the cloud

Literally hundreds of tools and services are at your disposal through the major cloud providers. Those tools and services are generally separated into the following categories:

- Compute
- Storage
- Networking
- Resource management
- Cloud Artificial Intelligence (AI)
- Identity
- Security
- Serverless
- IoT

Following is a list of the most commonly used services across all three of the major cloud providers. These services include app deployment, virtual machine (VM) management, container orchestration, serverless functions, storage, and databases. Additional services are included, such as identity management, block storage, private cloud, secrets storage, and more. It’s far from an exhaustive list but can serve as a solid foundation as you begin to research your options and get a feel for what differentiates the cloud providers.

App deployment: Platform as a Service (PaaS) solution for deploying applications in a variety of languages, including Java, .NET, Python, Node.js, C#, Ruby, and Go
- Azure: Azure Cloud Services
- AWS: AWS Elastic Beanstalk
- GCP: Google App Engine

Virtual machine (VM) management: Infrastructure as a Service (IaaS) option for running virtual machines (VMs) with Linux or Windows
- Azure: Azure Virtual Machines
- AWS: Amazon EC2
- GCP: Google Compute Engine

Managed Kubernetes: Enables better container management via the popular orchestrator Kubernetes
- Azure: Azure Kubernetes Service (AKS)
- AWS: Amazon Elastic Kubernetes Service (EKS)
- GCP: Google Kubernetes Engine

Serverless: Enables users to create logical workflows of serverless functions
- Azure: Azure Functions
- AWS: AWS Lambda
- GCP: Google Cloud Functions

Cloud storage: Unstructured object storage with caching
- Azure: Azure Blob Storage
- AWS: Amazon S3
- GCP: Google Cloud Storage

Databases: SQL and NoSQL databases, on demand
- Azure: Azure Cosmos DB
- AWS: Amazon Relational Database Service (RDS) and Amazon DynamoDB (NoSQL)
- GCP: Google Cloud SQL and Google Cloud Bigtable (NoSQL)

As you explore the three major cloud providers, you notice a long list of services. You may feel overwhelmed by the hundreds of options at your disposal. If, by chance, you can’t find what you need, the marketplace will likely provide something similar. The marketplace is where independent developers offer services that plug into the cloud — hosted by Azure, AWS, or GCP.

The table below lists additional services, by service category and functionality, that most (if not all) cloud providers offer.

Common Cloud Services

Block storage: Data storage used in storage-area network (SAN) environments. Block storage is similar to storing data on a hard drive.
Virtual Private Cloud (VPC): Logically isolated, shared computing resources.
Firewall: Network security that controls traffic.
Content Delivery Network (CDN): Content delivery based on the location of the user. Typically utilizes caching, load balancing, and analytics.
Domain Name System (DNS): Translator of domain names to IP addresses for browsers.
Single Sign-On (SSO): Access control to multiple systems or applications using the same credentials. If you’ve logged into an independent application with your Google, Twitter, or GitHub credentials, you’ve used SSO.
Identity and Access Management (IAM): Role-based user access management. Predetermined roles have access to a set group of features; users are assigned roles.
Telemetry, monitoring, and logging: Tools to provide application insights on performance, server load, memory consumption, and more.
Deployments: Configuration, infrastructure, and release pipeline management tools.
Cloud shell: Shell access from a command-line interface (CLI) within the browser.
Secrets storage: Secure storage of keys, tokens, passwords, certificates, and other secrets.
Message queues: Dynamically scaled message brokers.
Machine Learning (ML): Deep learning frameworks and tools for data scientists.
IoT: Device connection and management.
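To give a feel for what “services are accessed and infrastructure is provisioned through APIs” looks like in practice, here is a minimal sketch that writes and reads objects in cloud storage using the AWS SDK for Python (boto3). It assumes boto3 is installed and credentials are configured; the bucket name is hypothetical, and equivalent calls exist in the Azure and GCP SDKs.

```python
# Minimal sketch: using cloud object storage through an API with boto3.
# Assumes AWS credentials are already configured; the bucket name is invented.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-devops-artifacts"  # hypothetical bucket you already own

# Upload a build artifact (unstructured object storage)
s3.put_object(Bucket=BUCKET, Key="releases/app-1.0.0.txt", Body=b"release notes")

# Read it back
response = s3.get_object(Bucket=BUCKET, Key="releases/app-1.0.0.txt")
print(response["Body"].read().decode("utf-8"))

# List everything stored under the releases/ prefix
for obj in s3.list_objects_v2(Bucket=BUCKET, Prefix="releases/").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The same few lines of code can run from a laptop, a build server, or a serverless function, which is what makes cloud services such a natural fit for DevOps automation.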
Article / Updated 08-16-2023
What is DevOps? It’s difficult to provide you with an exact DevOps prescription — because none exists. DevOps is a philosophy that guides software development, one that prioritizes people over process and process over tooling. DevOps builds a culture of trust, collaboration, and continuous improvement. As a culture, the DevOps philosophy views the development process in a holistic way, taking into account everyone involved: developers, testers, operations folks, security, and infrastructure engineers. DevOps doesn’t put any one of these groups above the others, nor does it rank the importance of their work. Instead, a DevOps company treats the entire team of engineers as critical to ensuring that the customer has the best experience possible.

DevOps evolved from Agile

In 2001, 17 software engineers met and published the “Manifesto for Agile Software Development,” which spelled out the 12 principles of Agile project management. This new workflow was a response to the frustration and inflexibility of teams working in a waterfall (linear) process. Working within Agile principles, engineers aren’t required to adhere to original requirements or follow a linear development workflow in which each team hands off work to the next. Instead, they’re capable of adapting to the ever-changing needs of the business or the market, and sometimes even the changing technology and tools.

Although Agile revolutionized software development in many ways, it failed to address the conflict between developers and operations specialists. Silos still developed around technical skill sets and specialties, and developers still handed off code to operations folks to deploy and support. In 2008, Andrew Clay Shafer talked to Patrick Debois about his frustrations with the constant conflict between developers and operations folks. Together, they launched the first DevOpsDays event in Belgium to create a better — and more agile — way of approaching software development. This evolution of Agile took hold, and DevOps has since enabled companies around the globe to produce better software faster (and usually cheaper). DevOps is not a fad. It’s a widely accepted engineering philosophy.

DevOps focuses on people

Anyone who says that DevOps is all about tooling wants to sell you something. Above all else, DevOps is a philosophy that focuses on engineers and how they can better work together to produce great software. You could spend millions on every DevOps tool in the world and still be no closer to DevOps nirvana. Instead, focus on your most important engineering asset: engineers. Happy engineers make great software. How do you make happy engineers? Well, you create a collaborative work environment in which mutual respect, shared knowledge, and acknowledgement of hard work can thrive.

Company culture is the foundation of DevOps

Your company has a culture, even if it has been left to develop through inertia. That culture has more influence on your job satisfaction, productivity, and team velocity than you probably realize. Company culture is best described as the unspoken expectations, behavior, and values of an organization. Culture is what tells your employees whether company leadership is open to new ideas. It’s what informs an employee’s decision as to whether to come forward with a problem or to sweep it under the rug. Culture is something to be designed and refined, not something to leave to chance.
Though the actual definition varies from company to company and person to person, DevOps is a cultural approach to engineering at its core. A toxic company culture will kill your DevOps journey before it even starts. Even if your engineering team adopts a DevOps mindset, the attitudes and challenges of the larger company will bleed into your environment.

With DevOps, you avoid blame, grow trust, and focus on the customer. You give your engineers autonomy and empower them to do what they do best: engineer solutions. As you begin to implement DevOps, you give your engineers the time and space to adjust to it, allowing them the opportunities to get to know each other better and build rapport with engineers with different specialties. Also, you measure progress and reward achievements. Never blame individuals for failures. Instead, the team should continuously improve together, and achievements should be celebrated and rewarded.

You learn by observing your process and collecting data

Observing your workflow without expectation is a powerful technique to use to see the successes and challenges of your workflow realistically. This observation is the only way to find the correct solution to the areas and issues that create bottlenecks in your processes. Just as with software, slapping some Kubernetes (or other new tool) on a problem doesn’t necessarily fix it. You have to know where the problems are before you go about fixing them. As you continue, you collect data — not to measure success or failure but to track the team’s performance. You determine what works, what doesn’t work, and what to try next time.

Persuasion is key to DevOps adoption

Selling the idea of DevOps to your leaders, peers, and employees isn’t easy. The process isn’t always intuitive to engineers, either. Shouldn’t a great idea simply sell itself? If only it were that easy. However, a key concept to always keep in mind as you implement DevOps is that it emphasizes people. The so-called “soft skills” of communication and collaboration are central to your DevOps transformation. Persuading other folks on your team and within your company to adopt DevOps requires practicing good communication skills. Early conversations that you have with colleagues about DevOps can set you up for success down the road — especially when you hit an unexpected speed bump.

Small, incremental changes are priceless in DevOps

The aspect of DevOps that emphasizes making changes in small, incremental ways has its roots in lean manufacturing, which embraces accelerated feedback, continuous improvement, and swifter time to market. Water is a good metaphor for DevOps transformations. Water is one of the world’s most powerful elements. Unless people are watching the flood waters rise in front of them, they think of it as relatively harmless. The Colorado River carved the Grand Canyon. Slowly, over millions of years, water cut through stone to expose nearly two billion years of soil and rock. You can be like water. Be the slow, relentless change in your organization. Here’s that famous quote from a Bruce Lee interview to inspire you: “Be formless, shapeless, like water. Now you put water into a cup, it becomes the cup. You put water into a bottle, it becomes the bottle. You put it in a teapot, it becomes the teapot. Now, water can flow or it can crash. Be water, my friend.”

Making incremental changes means, for example, that you find a problem and you fix that problem. Then you fix the next one.
You don’t take on too much too fast and you don’t pick every battle to fight. You understand that some fights aren’t worth the energy or social capital that they can cost you. Ultimately, DevOps isn’t a list of steps you can take, but is rather an approach that should guide the decisions you make as you develop.
Article / Updated 08-16-2023
DevOps has no ideal organizational structure. Like everything in tech, the “right” answer concerning your company’s structure depends on your unique situation: your current team, your plans for growth, your team’s size, your team’s available skill sets, your product, and on and on. Aligning your DevOps team’s vision should be your first mission. Only after you’ve removed the low-hanging fruit of obvious friction between people should you begin rearranging teams. Even then, allow some flexibility.

If you approach a reorganization with openness and flexibility, you send the message that you’re willing to listen and give your team autonomy — a basic tenet of DevOps. You may already have a Python or Go developer who’s passionate and curious about infrastructure and configuration management. Maybe that person can switch into a more ops-focused role in your new organization. Put yourself in that person’s shoes. Wouldn’t you be loyal to an organization that took a risk on you? Wouldn’t you be excited to work hard? And that excitement is contagious.

Here, you learn how to align the teams you already have in place, dedicate a team to DevOps practices, and create cross-functional teams — all approaches from which you can choose to orient your teams toward DevOps. You can choose one approach and allow it to evolve from there. Don’t feel that this decision is permanent and unmovable. DevOps focuses on rapid iteration and continual improvement, and that’s the prime benefit of this methodology. That philosophy applies to teams as well.

Aligning functional teams for DevOps

In this approach, you create strong collaboration between your traditional development and operations teams. The teams remain functional in nature — one focused on ops, one focused on code. But their incentives are aligned. They will grow to trust each other and work as two teams yoked together. For smaller engineering organizations, aligning functional teams is a solid choice. Even as a first step, this alignment can reinforce the positive changes you’ve made so far. You typically start the alignment by taking the time to build rapport. Ensure that each person on both teams not only intellectually understands the other team’s role and constraints but also empathizes with the pain points.

For this approach, it’s a good idea to promote a policy of “You build it, you support it.” This policy means that everyone — developer and operations person alike — participates in your on-call rotation. This participation allows developers to start understanding the frustrations of being called in the middle of the night and struggling while foggy-eyed and caffeine-deprived to fix a bug that’s impacting customers. Operations folks also begin to trust your developers’ commitment to their work. Even this small change builds an extraordinary amount of trust.

A word of caution: If developers fight hard against being on call, a larger problem is at play in your organization. The pushback is not uncommon because being on call is wildly different from their normal day-to-day responsibilities. The pushback often comes from a place of discomfort and fear. You can help mitigate this reaction by addressing the fact that your developers may not know what to do the first few times they’re on call. They may not be familiar with the infrastructure, and that’s okay. Encourage them to escalate the incident and page someone with more experience. Finally, create a runbook with common alerts and what actions to take.
Providing this resource will help to assuage some fear until they begin to get the hang of things. Another tactic to help spur collaboration and form a more cohesive DevOps team is to introduce a day of shadowing, with each team “trading” a colleague. The traded person simply shadows someone else on the team, sits at their desk (or in their area), and assists in their day-to-day responsibilities. They may help with work, discuss problems as a team (pair programming), and learn more about the system from a different point of view. This style of teaching isn’t prescriptive. Instead, it lends itself to curiosity and building trust. Colleagues should feel free to ask questions — even the “stupid” variety — and learn freely. No performance expectations exist. The time should be spent simply getting to know each other and appreciating each other’s work. Any productive output is a bonus!

In this alignment approach, both teams absolutely must be involved in the planning, architecture, and development processes. They must share responsibilities and accountability throughout the entire development life cycle.

Dedicating a DevOps team

A dedicated DevOps team is more an evolution of the sysadmin team than a true DevOps team. It is an operations team with a mix of skill sets. Perhaps some engineers are familiar with configuration management, others with infrastructure as code (IaC), and others are experts in containers, cloud-native infrastructure, or CI/CD (continuous integration and continuous delivery/deployment).

If you think that putting a group of humans into an official team is enough to break down silos, you’re mistaken. Humans are more complex than spreadsheets. Hierarchy doesn’t mean anything if your silos have entered a phase in which they are unhealthy and tribal. In toxic cultures, a strongman style of leadership can emerge that is almost always followed by people taking sides. If you see this on your own team, you have work to do.

Although any approach may work for your team, this dedicated-team approach is the one you should think through the most. The greatest disadvantage of a dedicated DevOps team is that it easily becomes a continuation of traditional engineering teams without acknowledging the need to align teams, reduce silos, and remove friction. The risks of continuing friction (or creating more) are high in this approach. Tread carefully to ensure you’re choosing this team organization for a specific reason.

The benefit of this DevOps approach is having a dedicated team to address major infrastructure changes or adjustments. If you’re struggling with operations-centered issues that are slowing down your deployments or causing site reliability concerns, this might be a good approach — even temporarily. A dedicated team can also make sense if you’re planning to move a legacy application to the cloud. But rather than calling this team a DevOps team, you might try labeling it an automation team. This dedicated group of engineers can focus completely on ensuring that you’ve set up the correct infrastructure and automation tools. You can then proceed with confidence that your application will land in the cloud without major disruption. Still, this approach is temporary. If you keep the team isolated for too long, you risk going down a slippery slope from rapid growth to embedded silo.

Creating cross-functional product teams for DevOps

A cross-functional team is a team formed around a single product focus.
Rather than have separate teams for development, user interface and user experience (UI/UX), quality assurance (QA), and operations, you combine people from each of these teams. A cross-functional team works best in medium to large organizations. You need enough developers and operations folks to fill in the positions of each product team. Each cross-functional team looks a bit different. It’s a good idea to have, at a minimum, one operations person per team. Do not ask an operations person to split their responsibilities between two teams. This scenario is unfair to them and will quickly create friction between the two product teams. Give your engineers the privilege of being able to focus and dig deep into their work.

If your organization is still small or in the startup phase, you can think of your entire engineering organization as a cross-functional team. Keep it small and focused. When you begin to approach having 10–12 people, start thinking about how you can reorganize engineers.

The image below shows what your cross-functional teams could look like. But keep in mind that their composition varies from team to team and from organization to organization. Some products have a strong design focus, which means that you may have multiple designers on each team. Other products are technical ones designed for engineers who don’t care much for aesthetics. Teams for that kind of product may have one designer — or none at all. If your organization is large enough, you can certainly create multiple teams using different DevOps ideas and approaches. Remember that your organization is unique. Feel empowered to make decisions based on your current circumstances and adjust from there. Here are some possible combinations of various types of product teams:

- Legacy Product Team: Project Manager (PM), Front-end Developer, Back-end Developer, Back-end Developer, Site Reliability Engineer (SRE), Automation Engineer, QA Tester
- Cloud Transformation Team: SRE, SRE, Operations Engineer, Automation Engineer, Back-end Developer
- MVP Team: PM, Designer, UX Engineer, Front-end Developer, Back-end Developer, Operations Engineer

The downside of a cross-functional product team is that engineers lose the camaraderie of engineers with their same skill sets and passions. Having a group of like-minded individuals with whom you can socialize and from whom you can learn is an important aspect of job satisfaction. Check out a solution to this issue below.

You can give your engineers dedicated work time to spend with their tribes. You can do something as generous as paying for lunch once every week so that they can get together and talk. Or you might provide 10–20 percent of work time for them to work on projects as a tribe. Either way, you need your engineers to stay sharp. Tribes share industry knowledge, provide sound feedback, and support career growth. Provide time for your engineers to learn from people with whom they share education, experience, and goals. This time provides a safe place where they can relax and feel at home.

No amount of perfect finagling will overcome the shortfalls of a bad organizational culture. But if you’ve paid attention so far and made the appropriate strides, the next step is to form teams that reinforce the cultural ideals you’ve already put in place.
Article / Updated 06-06-2023
The term DevOps (a combination of software development and operations) refers to a set of practices, tools, and cultural philosophy that automate and integrate the work of software development and IT teams. Marrying the cloud with your DevOps practice can accelerate the work you’ve already accomplished. When used together, both DevOps and the cloud can drive your company’s digital transformation. You’ll see results as long as you emphasize the priorities of DevOps: people, process, and technology.

The cloud — along with other tooling — falls squarely into the technical part of your DevOps implementation. Cloud computing enables automation for your developers and operations folks in a way that simply isn’t possible when you manage your own physical infrastructure. Provisioning infrastructure through code in the cloud — a practice referred to as Infrastructure as Code (IaC) — enables you to create templates and repeatable processes. When you track changes to your infrastructure code through source control, you permit your team to operate seamlessly and track changes. IaC is much more repeatable and automated — not to mention faster — than having engineers click around a portal. Even instructions for the portal aren’t foolproof: you risk making small, yet significant, changes to your infrastructure setup if you repeatedly build the same setup through the portal rather than from a YAML file.

Taking your DevOps culture to the cloud

People often speak about DevOps and cloud computing as if they are intertwined and, in many ways, they are. Be aware, however, that you can adopt DevOps — or begin to transform your engineering organization — without going all in on the cloud. It’s perfectly reasonable to first establish the standards, practices, and processes for your team before you shift your infrastructure to a cloud provider. Although people speak as though everyone is already on the cloud, you are still on the cutting edge of the shift to the cloud. Cloud providers are becoming more robust by the day, and engineering companies are slowly transitioning their self-hosted services to the cloud. With that in mind, an organization seeking to adopt DevOps would be wise to strongly consider utilizing the services of a major cloud provider.

Anyone with DevOps experience wouldn’t likely call the cloud a NoOps solution, but they might call it OpsLite. Cloud services often abstract complex operations architecture in a way that makes that architecture more friendly to developers and empowers them to take more ownership of their components. If you’ve ever grumbled that developers should be included in an on-call rotation, you’re right — they should be. Including developers in the on-call rotation is a great way to ramp up their knowledge of deploying code as well as managing and provisioning the infrastructure on which their services run. This reduces operational overhead and frees up the time of operations specialists to work on proactive solutions.

Learning through DevOps adoption

If your team is capable of adopting DevOps and shifting toward cloud computing at the same time, you can use these shifts as learning opportunities for both developers and operations folks. While your team shifts to the cloud, developers have the opportunity to familiarize operations specialists with code — perhaps even specific languages — and source control, and operations folks can teach developers about infrastructure.
When both groups are both the experts and the newbies, neither group has to deal with much of an ego-damaging transfer of knowledge. The trust, rapport, and healthy dynamic that emerge from these interactions will galvanize your team and last much longer than the immediate work. In many ways, you’re reinforcing your DevOps culture through tooling your DevOps practice.

Benefitting from cloud services in your DevOps initiative

Modern operations is changing and evolving. Your competitors are already adopting new ways of innovating faster and accelerating their software delivery life cycles. Cloud computing represents a big shift from the traditional way businesses think about IT resources. By outsourcing much of your infrastructure and operations requirements to a cloud provider, you reduce overhead and free your team to focus on delivering better software to your users. Here are six common reasons organizations are turning to cloud computing services:

- Improving affordability. Cloud providers allow you to select only the services you need, when you need them. Imagine if you could access cable TV but pay for only the channels you watch. You’d love that, wouldn’t you? Most DevOps team members would! Cloud providers do just that while also providing you with the most up-to-date computing hardware housed in physically secure datacenters.
- Automating deployments. Changes to the system — deployments — are the most common contributors to outages or service disruptions. Cloud providers make releasing code an automated, repeatable process, significantly decreasing the probability of making mistakes in manual releases and introducing bugs. Automated deployments also enable developers to release their own code. Ultimately, automated deployments simplify the process while reducing site downtime and reactionary triaging in production.
- Accelerating delivery. The cloud reduces friction along nearly every phase of the software delivery life cycle. Although setup is required, it often takes no more than double the time required to do the process manually, and you have to set up a service or process only once. Accelerated delivery gives you a ton of flexibility.
- Increasing security. Cloud providers make security part of their offering. Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP) meet different compliance standards and provide policies, services, and controls that help you reinforce your system’s security. In addition, if you utilize a deployment pipeline tool within the cloud, you can add security checks before new code is released to an environment, thereby reducing the possibility of security vulnerabilities.
- Decreasing failure. Through cloud build and release pipelines, your team is capable of creating automated tests to confirm functionality, code quality, security, and compliance of any code introduced into your systems. This capability decreases the possibility of bugs while also reducing the risk of problematic deployments.
- Building more resilient and scalable systems. The cloud allows organizations to scale up, scale out, and increase capacity within seconds. This elastic scaling enables spinning up compute and storage resources as needed, no matter where in the world your users interact with your product. This approach permits you to better serve your customers and more efficiently manage infrastructure costs.

The DevOps approach is all about creating a cyclical method where you benefit and learn from the process each time you go through it.
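This article describes infrastructure as code as templates and repeatable processes kept in source control. As a concrete illustration, here is a minimal sketch using Pulumi, one of several IaC tools that let you define cloud resources in Python. The tool choice, resource names, and AMI ID are assumptions for illustration only, not recommendations from the original text; Terraform, CloudFormation, or ARM templates serve the same purpose.

```python
# Minimal IaC sketch (Pulumi with the AWS provider, assumed for illustration).
# Resource names are hypothetical. Because this file lives in source control,
# every infrastructure change is reviewable and repeatable.
import pulumi
import pulumi_aws as aws

# An S3 bucket for build artifacts, with versioning so old releases are kept.
artifacts = aws.s3.Bucket(
    "build-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# A small Linux VM for a staging environment.
staging = aws.ec2.Instance(
    "staging-server",
    ami="ami-0123456789abcdef0",  # placeholder AMI ID; look up a real one for your region
    instance_type="t3.micro",
    tags={"environment": "staging", "managed-by": "iac"},
)

# Exported outputs are shown when the stack is deployed (for example, with `pulumi up`).
pulumi.export("artifact_bucket", artifacts.id)
pulumi.export("staging_public_ip", staging.public_ip)
```

Running the same definition twice produces the same environment, which is exactly the repeatability that clicking around a portal cannot guarantee.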
Article / Updated 04-17-2023
Improving engineering performance as part of the DevOps process can have sweeping impacts on the entire business. Streamlining the development life cycle and removing bottlenecks will serve to accelerate the overall performance of the business — ultimately increasing the bottom line. And if you think, as a DevOps engineer, that you shouldn’t have to care about the business performance, you’re wrong. According to DevOps Research and Assessment (DORA), high-performing DevOps teams consistently outpace their competitors in four key areas:

- Deployment frequency: This term refers to how often your engineers can deploy code. Improving performance means being able to deploy multiple times per day, as desired.
- Lead time: Lead time is how long you take to go from committing new code to running that code in a production environment. The highest performers, according to DORA, have a lead time of under an hour, whereas average performers need up to a month.
- MTTR (Mean Time to Recover): MTTR refers to how long you take to restore a service after an incident or outage occurs. Ideally, you want to aim for under an hour. An outage costs serious money, especially when it impacts profit centers of the application. Long outages destroy trust, decrease morale, and imply additional organizational challenges.
- Change failure rate: This term refers to the rate at which changes to your system negatively impact its performance. Although you will never reach a change failure rate of zero percent, you can absolutely approach zero by increasing your automated tests and relying on a deployment pipeline with continuous integration checks and gates — all of which ensure quality.

Eliminating perfection as a measure of DevOps success

DevOps relies on the mantra “Done is better than perfect.” It seems to be one of those impossible-to-attribute quotations, but the words nonetheless speak truth. Attempting to attain perfection is an enemy of effectiveness and productivity. Most engineers, including those of the DevOps variety, suffer from some version of analysis paralysis — a mental affliction that limits your productivity by tempting you to overanalyze your work and sidestep any potential mishap. Training imperfection into your work requires you to embrace the possibility of failure and the inevitability of refactoring.

Creating feedback loops around the customer and looping back to various stages of the pipeline are primary tenets of DevOps. In DevOps, you’re connecting the ends to bend the line into a circle. When you think iteratively and circularly, pushing out code that’s not perfect seems a lot less scary because the code isn’t carved into stone. Instead, it’s in a temporary state that DevOps engineers improve frequently as you gather more data and feedback.

Designing small teams for DevOps

You’ve likely heard of Amazon’s “two-pizza” teams. The concept broadly speaks to the importance of small teams. Now, the exact number of people that comprise a two-pizza team varies according to your appetites, but it’s a good idea to keep teams under 12 people. When a group approaches 9, 10, or 11 people, try splitting it into two. The sweet spot for group size is around 4–6 people. Your exact number may vary depending on the people involved, but the point is this: When groups get too large, communication becomes challenging, cliques form, and teamwork suffers.

Here’s one other bonus goal when forming DevOps teams: even numbers. It’s a good idea to give people a “buddy” at work — someone they can trust above all others.
In even-numbered groups, everyone has a buddy and no one is left out. You can pair off evenly, and it tends to work well. Forming even-numbered groups isn’t always achievable because of personnel numbers, but it’s something to keep in mind.

A formula for measuring communication channels is n (n – 1) / 2, where n represents the number of people. You can estimate how complex your team’s communication will be by doing a simple calculation. For example, the formula for a two-pizza team of 10 would be 10 (10 – 1) / 2 = 45 communication channels. You can imagine how complex larger teams can become.

Tracking your DevOps work

If you can get over the small overhead of jotting down what you do every day, the outcomes will provide you with exceptional value. Having real data on how you use your time helps you track your own and your team’s efficacy. As Peter Drucker famously said, “If you can't measure it, you can't improve it.” How many days do you leave work feeling like you did nothing? You just had meeting after meeting or random interruptions all day. You’re not alone. Many workers have the same problem. It can be difficult to track your progress and, therefore, your productivity. The divergence between our feelings of efficacy and the reality of our efficacy is dangerous territory for any DevOps team.

Try using pen and paper rather than some automated tool for this. Yes, you can use software to track how you use your time on your computer. It can tell you when you’re reading email, when you’re slacking, and when you’re coding, but it lacks nuance and often misses or incorrectly categorizes large chunks of time. After you have an idea of what you’re doing and when, you can start to identify which activities fall into which quadrants of the Eisenhower Decision Matrix. What busy work are you doing routinely that provides no value to you or the organization?

Reducing friction in DevOps projects

One of the best things a manager can do for a DevOps engineering team is to leave them alone. Hire curious engineers who are capable of solving problems independently and then let them do their job. The more you can reduce the friction that slows their engineering work, the more effective your team will be. Reducing friction includes the friction that exists between teams — especially operations and development. Don’t forget specialists like security, too. Aligning goals and incentives increases velocity. If everyone is focused on achieving the same things, they can join together as a team and move methodically toward those goals.

Humanizing alerting for DevOps success

Every engineering team has alerts on actions or events that don’t matter. Having all those alerts desensitizes engineers to the truly important alerts. Many engineers have become conditioned to ignore email alerts because of an overabundance of messages. Alert fatigue ails many engineering organizations and comes at a high cost. If you’re inundated daily, picking out the important from a sea of the unimportant is impossible. You could even say that these messages are urgent but not important. Email is not an ideal vehicle for alerting because it’s not time sensitive (many people check email only a few times a day) and it’s easily buried in other minutiae. Applying what you’ve learned about rapid iteration, reevaluate your alerting thresholds regularly to ensure an appropriate amount of coverage without too many false positives. Identifying which alerts aren’t necessary takes time and work. And it’ll probably be a little scary, right?
Deleting an alert or increasing a threshold always comes with a bit of risk. What if the alert is actually important? If it is, you’ll figure it out. Remember, you can’t fear failure in a DevOps organization. You must embrace it so that you can push forward and continuously improve. If you let fear guide your decisions, you stagnate — as an engineer and as an organization.
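To make the four DORA measures discussed above less abstract, here is a minimal sketch of how you might compute them from simple deployment and incident records. The data structures, field layout, and numbers are hypothetical, invented only for illustration; real teams usually pull this data from their deployment pipeline and incident tracker.

```python
# Minimal sketch: computing four DORA-style measures from hypothetical
# deployment and incident records. All sample data is invented.
from datetime import datetime
from statistics import mean

deployments = [
    # (commit time, deploy time, caused an incident?)
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 40), False),
    (datetime(2024, 5, 1, 13, 0), datetime(2024, 5, 1, 13, 55), True),
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 10, 30), False),
]

incidents = [
    # (start of outage, service restored)
    (datetime(2024, 5, 1, 14, 0), datetime(2024, 5, 1, 14, 35)),
]

days_observed = 2
deployment_frequency = len(deployments) / days_observed  # deploys per day
lead_time = mean((deployed - committed).total_seconds() / 60
                 for committed, deployed, _ in deployments)  # minutes
mttr = mean((restored - started).total_seconds() / 60
            for started, restored in incidents)  # minutes
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.1f} per day")
print(f"Lead time for changes: {lead_time:.0f} minutes")
print(f"Mean time to recover: {mttr:.0f} minutes")
print(f"Change failure rate: {change_failure_rate:.0%}")
```

Even a rough calculation like this, run regularly, gives the team a shared, objective view of whether its changes are making delivery faster and safer.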
Article / Updated 07-29-2022
When done correctly, DevOps offers significant advantages for your organization. This article presents the key points to know about how DevOps benefits your organization. Use it as a reference to help you persuade your colleagues or to reinforce your understanding of why you chose to go the DevOps route when the road gets bumpy.

DevOps helps you accept constant change

The tech landscape is an ever-changing environment. Some languages evolve and new ones are created. Frameworks come and go. Infrastructure tooling changes to meet the ever-growing demands for hosting applications more efficiently and delivering services more quickly. Tools continue to abstract low-level computing to reduce engineering overhead. The only constant is change. Your ability to adapt to that change will determine your success as an individual contributor, manager, or executive. Regardless of the role you currently fill at your company or hope to eventually play, it is vital to adapt quickly and remove as much friction from growth as possible. DevOps enables you to adapt and grow by improving communication and collaboration.

DevOps embraces the cloud

The cloud isn’t the future; it’s now. Although you may still be transitioning or not yet ready to move, realize that the cloud is the way forward for all but a few companies. It gives you more flexibility than traditional infrastructure, lowers the stress of operations, and (usually) costs significantly less because of a pay-as-you-go pricing structure. Public, private, and hybrid clouds give you endless possibilities to run your business better. The ability to spin up (launch) resources within minutes is something most companies had never experienced prior to the cloud. This agility provided by the cloud goes hand in hand with DevOps. Omri Gazitt from Puppet, a company focused on automation and configuration management, put it best: “As organizations move to the cloud, they are revisiting their core assumptions about how they deliver software.”

With the cloud, APIs connect every service, platform, and infrastructure tool so that you can manage your resources and applications seamlessly. As you migrate to the cloud, you can reevaluate past architecture decisions and slowly transition your application and system to be cloud-native, or designed with the cloud in mind.

DevOps helps you hire the best

Because of increased demand, great engineers are scarce. There simply aren’t enough engineers to fill all the jobs currently open or to meet market demand over the next decade and beyond. Although finding engineers can be difficult, it’s not impossible, especially if you focus on discovering engineers who embrace curiosity and aren’t afraid to fail. If you implement DevOps in your overall engineering culture, you can level up engineers and train them in the methodology and technology that supports continuous improvement.

It’s difficult to measure potential in an interview. Usually, talent whispers. The most talented engineers typically aren’t gregarious or braggarts; they let their work speak for them. DevOps enables you to listen more closely to the personal and professional interests of the engineers you interview. Try choosing candidates based on their level of curiosity, communication skills, and enthusiasm. Those qualities can see your team through the troughs of fear, uncertainty, and doubt. They can carry the team through hard decisions, made within constraints, in their attempt to solve difficult problems.
You can teach someone a skill, but teaching someone how to learn is an entirely different matter. The learning culture you create in your DevOps organization enables you to prioritize a growth mindset over technical prowess. In DevOps, hiring for the team is critical. Every individual is a piece of a whole, and the team must have balance holistically. Achieving this balance means that sometimes you don’t hire the “best” engineer; you hire the best engineer for the team. When you hire for the DevOps team, you can, like draft horses yoked together, pull more weight than you could individually. With DevOps, you can multiply the individual components of your team and, as a whole, create a powerhouse of a team.

DevOps keeps you competitive

The yearly State of DevOps Report released by DevOps Research and Assessment (DORA) makes it clear: Companies across the world are using DevOps to adjust their engineering practices and are reaping the benefits. They see increases in engineering production and reductions in cost. With DevOps, these companies are shifting from clunky processes and systems to a streamlined way of developing software focused on the end user. DevOps enables companies to create reliable infrastructure and utilize that infrastructure to release software more quickly and more reliably. The bottom line is this: High-performing organizations use DevOps, and they’re crushing their competition by increasing their deployment frequency and significantly decreasing the failures that occur because of changes in the system. If you want to compete, you must adopt solid DevOps methodologies. Maybe not all of them, and definitely not all at one time — but the time to wait and see whether DevOps is worthwhile has passed.

DevOps helps solve human problems

Humans have reached a point in our evolution at which technology is evolving faster than our brains. Thus, the greatest challenges humans face are due to human limitations — not the limitations of software or infrastructure. Unlike other software development methodologies, DevOps focuses holistically on your sociotechnical system. Embracing DevOps requires a shift in culture and mindset. But if you achieve a DevOps culture and mindset, you and your organization reap almost limitless benefits. When engineers are empowered to explore, free of the pressure and fear of failure, amazing things happen. Engineers discover new ways to solve problems. They approach projects and problems with a healthy mindset and work together more fluidly, without needless and negative competition.

DevOps challenges employees

DevOps accelerates the growth of individual engineers as well as that of the engineering team as a whole. Engineers are smart people. They’re also naturally curious. A great engineer who embraces a growth mindset needs new challenges after mastering a particular technology, tool, or methodology; otherwise, they often feel stagnant. They need to feel as if their brain and skill sets are being stretched — not to the point of being overwhelmed or stressed, but enough to feel that they’re growing. That is the tension described by Dan Pink in Drive. If you can strike that balance, your engineers will thrive — as individuals and as a team. The methodology of DevOps promotes T-shaped skills, which means that engineers specialize in one area with deep knowledge and have a broad understanding of many other areas. This approach allows engineers to explore other areas of interest. Perhaps a Python engineer has an interest in cloud infrastructure, for example.
No other engineering methodology permits and encourages engineers to explore as much as DevOps does, and it’s a huge contributor to hiring and retaining talent.

DevOps bridges gaps

One of the challenges of modern technology companies is the gap between the needs of the business and the needs of engineering. In a traditional company, with traditional management strategies, a natural friction exists between engineering and departments like marketing, sales, and business development. This friction stems from a lack of alignment. Each department is measured by different indicators of success. DevOps seeks to unify each department of a business and create a shared understanding and respect. That respect for each other’s jobs and contributions is what allows every person in the company to thrive. It removes the friction and improves acceleration. Think about a team of sled dogs. If each dog is moving in separate directions, the sled goes nowhere. Now imagine the dogs working together, focused on moving forward — together. When you lack friction internally, the only challenges you face are external, and external challenges are almost always more manageable than internal strife.

DevOps lets you fail well

Failure is inevitable. It’s simply unavoidable. Predicting every way in which your system can fail is impossible because of all the unknowns. (And it can fail spectacularly, can’t it?) Instead of avoiding failure at all costs and feeling crushed when failure does occur, you can prepare for it. DevOps prepares organizations to respond to failure, but not in a panicky, stress-induced way. Incidents will always involve some level of stress. At some point along your command structure, an executive is likely to scream at the money being lost during a service outage. But you can reduce the stress your team experiences by using failure as a way of learning and adapting your system to become more resilient. Each incident is an opportunity to improve and grow, as individuals and as a team.

DevOps embraces kaizen, the art of continuous improvement. When your team experiences flow in their work, they can make tiny choices every day that contribute to long-term growth and, ultimately, a better product.

DevOps lets you continuously improve

Continuous improvement is a key ingredient in DevOps. Use the visualization of a never-ending cycle when applying DevOps to your organization. The cycle shouldn’t invoke fears through thoughts of Sisyphus, pushing a boulder up a hill for all eternity. Instead, think of this cycle as movement, like a snowball rolling downhill, gathering momentum and mass. As you adopt DevOps and integrate more and more of its core tenets into your everyday workflow, you’ll witness this acceleration first-hand.

The cycle of continuous improvement should always center around the customer. You must continuously think about the end user and integrate feedback into your software delivery life cycle. Fundamental to this cycle is CI/CD. Adopting CI/CD isn’t an all-or-nothing requirement of DevOps; instead, it’s a slow process of implementation. You should focus on mastering continuous integration first. Encourage engineers to share code freely and merge code frequently. This approach prevents isolation and silos from becoming blockers in your engineering organization. After your organization masters continuous integration, move on to continuous delivery, the practice of automating software delivery. This step requires automation because code will move through multiple checks to ensure quality.
After all your code is secure and accessible in a source code repository, you can begin to implement small changes continuously. Your goal is to remove manual barriers and improve your team’s ability to discover and fix bugs without customer impact.

DevOps automates toil

Acceleration and increased efficacy are at the core of the DevOps methodology. By automating labor-intensive manual processes, DevOps frees engineers to work on projects that make the software and systems more reliable and easily maintained — without the chaos of unexpected service interruptions. Site reliability engineering (SRE) deals with toil, which is the work required to keep services up and running that is manual and repetitive. Toil can be automated and lacks long-term value. Perhaps most important of all, toil scales linearly, which limits growth. Note that toil doesn’t refer to the overhead of administrative necessities such as meetings and planning. This type of work, if implemented with a DevOps mentality, is beneficial to the long-term acceleration of your team.

One of the core tenets of tooling your DevOps practice is automation. You can automate your deployment pipeline to include a verbose test suite as well as other gates through which code must pass to be released. In many ways, SRE is the next logical step in the evolution of DevOps and should be your next step after you and your organization master the core concepts of DevOps and implement the practice in your team.

DevOps accelerates delivery

The software delivery life cycle has evolved from the slow and linear waterfall process to an agile and continuous loop of DevOps. You no longer think up a product, develop it fully, and then release it to customers, hoping for its success. Instead, you create a feedback loop around the customer and continuously deliver iterative changes to your products. This connected circuit enables you to continuously improve your features and ensure that the customer is satisfied with what you’re delivering. When you connect all the dots and fully adopt DevOps in your organization, you watch as your team delivers better software faster. The changes will be small at first, just like the changes you release. But over time, those seemingly insignificant changes add up and create a team that accelerates its delivery of quality software.
Cheat Sheet / Updated 03-02-2022
A surprising facet of the DevOps engineering practice for software development is that it focuses more on the people and process of an organization than the specific tools or technologies that the engineers choose to utilize. DevOps offers no silver bullet, but it can have a massive impact on your organization and your products as an engineering culture of collaboration, ownership, and learning with the purpose of accelerating the software development lifecycle from ideation to production.
Article / Updated 11-12-2019
The DevOps approach involves a cycle as opposed to a line. It allows for continuous integration and continuous delivery, garnering consistent feedback throughout the process. The DevOps methodology is just one example of how processes have evolved. Development processes have changed radically over the last few decades, and for good reason.

In the 1960s, Margaret Hamilton led the engineering team that developed the software for the Apollo 11 mission. You don’t iteratively launch humans into space — at least they didn’t in the 1960s. It’s not an area of software in which “fail fast” feels like a particularly good approach. Lives are on the line, not to mention millions of dollars. Hamilton and her peers had to develop software using the waterfall methodology. The image below shows an example of a waterfall development process (occurring in a straight line), and the following image adds the phases. Notice how the arrows go in one direction. They show a clear beginning and a clear end. When you’re done, you’re done. Right? Nope. As much as many people would like to walk away from parts of their codebases forever (or kill them with fire), they usually don’t get the privilege.

The software developed by Hamilton and her team was a wild success (it’s mind-blowing to think that they developed in Assembly with zero helpers like error messaging). Not all projects were equally successful, however. Later, where waterfall failed, Agile succeeded. (DevOps was born out of the Agile movement.) Agile seeks to take the straight line of waterfall and bend it into a circle, creating a never-ending circuit through which your engineering team can iteratively and continuously improve. The image below depicts how to think of the circular development life cycle.

Often, the various loops prescribed by different organizations are influenced by the products those vendors sell. For instance, if the vendor sells infrastructure software and tooling, they likely emphasize that portion of the development life cycle, perhaps focusing most on deploying, monitoring, and supporting your software. There’s nothing for sale here. The stages focused on here are the most critical for developers, along with the ones people struggle with the most when learning to better manage their software development and adopt DevOps. The five stages of the software development life cycle are:

- Planning: The planning phase of your DevOps development process is perhaps the most key to your DevOps mission. It sets you up for success or failure down the road. It’s also the most fertile time to bring everyone together. By everyone, this means business stakeholders, sales and marketing, engineering, product, and others.
- Designing: In most companies, the designing phase is merged into the coding phase. This monstrous amalgam of design and code doesn’t permit a separation of the architectural strategy from implementation. However, if you leave things like database design, API logistics, and key infrastructure choices to the end of the development pipeline — or, perhaps worse, to the individual developers working on separate features — you’ll quickly find your codebase to be as siloed as your engineering team.
- Coding: The actual development of features is the face of the DevOps process and gets all the glory. But this is one of the least important steps in your development life cycle. In many ways, it’s simply the execution of the preceding areas of your pipeline. If done well, coding should be a relatively simple and straightforward process.
Now, if you’re a developer and just gasped at that last sentence because you’ve dealt with hundreds of random and difficult-to-solve bugs, it’s easy to understand how you feel. Coding is hard. Nothing about software development is easy. But by mastering planning, design, and architecture (and separating them from the actual implementation of code), you ensure that the hardest decisions of software development are abstracted away.

Testing: Testing is an area of your pipeline in which engineers from all areas of expertise can dive in and get involved, providing a unique opportunity for learning about testing, maintainability, and security. There are many different types of tests to ensure that your software works as expected.

Deploying: Deploying is the stage perhaps most closely associated with operations. Traditionally, your operations team would take the code developed by your developers and tested by your quality assurance (QA) team and release it to customers, making operations alone responsible for the release process. DevOps has had an enormous impact on this phase of the development process, and deploying is one of the areas with the most automation tools to draw from. From a DevOps perspective, your priority is simplifying the deployment process so that every engineer on your team is capable of deploying their own code.

This is not to say that operations teams lack unique knowledge or should be disbanded. Operations folks will always have unique knowledge about infrastructure, load balancing, and the like. In fact, removing the manual task of deploying software from your operations team frees them to save you time and money elsewhere; they’ll have time to work on improving your application’s reliability and maintainability.

The most important aspect of a delivery life cycle within the DevOps framework is that it is a true loop. When you get to the end, you go right back to the beginning. And if you receive support feedback from customers at any point along the way, return to an earlier phase (or back to planning) so that you can develop software in a way that best serves your customers.

The first part of building a pipeline is to treat it linearly: you’re building a straight line with set stages and checkpoints along the way. Within this framework, you can view the software development life cycle as something you start and something you finish. Waterfall lovers would be proud. But reality doesn’t let you work in a straight line. You can’t just start producing code, finish, and walk away. Instead, you’re forced to build on the foundational software you released in your first iterative loop and improve it through the second cycle, and so on. The process never ends, and you’ll never stop improving. The DevOps process helps you connect the start and finish of that straight pipeline so that you begin to understand it as an entire circuit, or loop, through which you continuously develop and improve.
Article / Updated 11-12-2019
It can be difficult to assess candidates for the right skill set when hiring for DevOps jobs … but not impossible. With a little creativity and a willingness to step outside the box, you can use interview techniques that help you find candidates with the right technical skills for your DevOps initiatives.

The age of obtuse riddles and sweat-inducing whiteboard interviews is waning — and for good reason. If a whiteboard interview is facilitated by an engineer who cares more about tricking the candidate than about having a genuine technical conversation, you’ll go nowhere fast. Whiteboard interviews have also taken a lot of heat recently for putting underrepresented and marginalized groups — including women and people of color — at a disadvantage. It’s absolutely vital for tech companies to hire diverse workforces, so this situation is unacceptable. However, you still have to gauge a person’s technical ability somehow. What’s the answer? Well, the good news is you have options. (The bad news is . . . you have options.) How you hire will determine who you are.

Revisit the whiteboard interview for DevOps job candidates

The whiteboard interview was never intended to be what it has become. In one whiteboard interview, the DevOps candidate was handed a computer program printed on eight sheets of paper. The instructions? “Debug the program.” Umm . . . excuse me? The whiteboard interview has become a situation in which you give a candidate a seemingly impossible problem, send them up to the board with a marker, and watch them sweat profusely while four or five people observe their panic. This type of interview gives no one quality information about whether the employer and the interviewee are a good fit for each other.

Although others have called for the elimination of the whiteboard interview, here’s a more nuanced suggestion: Change it to fit your DevOps needs. Make it a discussion between two people about a piece of code or a particular problem. Don’t make the problem something crazy, such as balancing a binary search tree. Unless the job you’re interviewing for literally involves writing Assembly, you don’t need to evaluate the candidate’s ability to write Assembly. Be cognizant of the DevOps job you’re looking to fill, the skill sets it requires, and the best way to measure those skills in a candidate. Have a single engineer on your team sit down with the candidate and talk about the problem. How would you start the conversation? What problems do you run into along the way? How would you both adapt your solutions to the challenges you encounter?

This conversational approach accomplishes two things for DevOps job candidates:

It reduces panic. Most people don’t think well under pressure. Plus, you don’t do your job every day while someone stares over your shoulder, criticizing every typo or mistake. You’d quit that job in an instant. So don’t force people to interview that way. Instead, give your candidates the chance to show off what they can do. You’ll gain insight into how they think and communicate.

It mimics real work. The conversational interview gives you an idea of what it would be like to work with this person. You don’t solve hard problems at work by watching each other struggle. (At least, you shouldn’t. Really. That’s not very collaborative or DevOps-y, leaving your colleagues to suffer in their silo.) Instead, you work together, trade ideas, think things through, make mistakes, recover, and find a solution — together.
The best whiteboard interviews are collaborative, communicative, and centered around curiosity — all the things that practitioners love about DevOps.

Offer take-home tests to DevOps job candidates

An alternative to a more traditional whiteboard interview is the take-home test. This type of test is particularly friendly to people who have any kind of anxiety or invisible disability that impacts their ability to participate in a whiteboard interview. This style of interview is also friendly to engineers who struggle intensely with imposter syndrome. Imposter syndrome describes high-achieving individuals who struggle to internalize their successes and experience a persistent feeling of being exposed as a fraud.

A take-home test consists of some type of problem that a DevOps candidate can solve at home in their own time. Take-home tests are often set up as a test suite for which the candidate must write code to make the tests pass. Alternatively, the problem could be something relatively small, such as, “Create a program in [your language of choice] that takes an input and reverses the characters.” The options are endless, and you can tailor the test to your tech stack as you see fit. You can even ask DevOps job candidates to deploy their application. Ensure that you allow candidates to use open source tools or provide them with the necessary subscriptions to use particular technologies.

The major drawback to take-home tests is that you’re asking people to take time during their evenings or weekends to do what is essentially free work. Even if you pay them for their work on the take-home test, this style of interview can unfairly impact a DevOps candidate who has other responsibilities outside of work, including caring for children, a partner, or ailing parents. Not every great engineer has unlimited time to commit to their craft. But if you limit your DevOps candidate pool to people who can afford to dedicate 5–10 hours to a take-home test, you’ll quickly find your team becoming homogenous and stagnant.

Review code with DevOps job candidates

One interview technique that can be really telling is when you sit down with an engineer, or a group of engineers, to solve real bugs in real code together. You can take a few approaches to a real-time code interview. You can mimic a take-home test and give the candidate an hour or so to create a program or write a function to make a series of tests pass (see the sketch below). You can also stage the interview like a code review in which you pull up an actual PR and dig into what the code is doing as well as what could be improved. In many ways, the pair-programming nature of a code review combines the best parts of both a whiteboard interview and a take-home test — but without some of their major drawbacks. Pair programming is an engineering practice in which two engineers sit down and work through a problem together. Typically, one person “drives” by owning the keyboard, but they collaboratively decide what approach is best, what code to add, and what to take away.

If the DevOps position is an operations-focused role, this real-time coding approach is even better. Although many ops folks are learning to implement infrastructure as code or manage configurations, they don’t have the same experience as developers. Reviewing what something does and how it might work is a fantastic way to confirm that the candidate has experience with the tools and technologies listed on their résumé as well as to ensure that the candidate can communicate with a team.
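The reverse-the-characters exercise mentioned above is small enough to sketch out. The following is a minimal, hypothetical example of what such a take-home or make-the-tests-pass prompt could look like in Python; the function name reverse_characters and the specific test cases are illustrative assumptions, not part of any standard exercise.

```python
# Hypothetical take-home prompt: implement reverse_characters so the tests pass.

def reverse_characters(text: str) -> str:
    """Return the input string with its characters in reverse order."""
    return text[::-1]


# The tiny test suite the candidate would be handed up front.
def test_reverses_a_word():
    assert reverse_characters("devops") == "spoved"


def test_empty_string_is_unchanged():
    assert reverse_characters("") == ""


def test_handles_spaces_and_punctuation():
    assert reverse_characters("a b!") == "!b a"


if __name__ == "__main__":
    # Run without a test runner by calling each test directly.
    test_reverses_a_word()
    test_empty_string_is_unchanged()
    test_handles_spaces_and_punctuation()
    print("All tests pass.")
```

In an interview setting, the one-line solution matters less than the conversation it opens up about edge cases, readability, and how the candidate would extend the tests.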
Building your DevOps team is an individual pursuit; it doesn’t need to match the teams you’ve seen elsewhere. Evaluate your goals and select the right candidate for each DevOps job.
Article / Updated 11-11-2019
The growth of DevOps culture has changed the way developers build and ship software. Before the Agile mindset emerged, development teams were assigned a feature, built it, and then forgot about it. They tossed the code over to the QA team, who then threw it back because of bugs or moved it along to the operations team. Operations was responsible for deploying and maintaining the code in production. This process was clumsy, to say the least, and it caused quite a bit of conflict. Because teams existed in silos, they had little to no insight into how other teams operated, including their processes and motivations. CI/CD, which stands for continuous integration and continuous delivery (or deployment), aims to break down the walls that have historically existed between teams and instead institute a smoother development process.

Benefits of continuous integration and continuous delivery

CI/CD offers many benefits. However, building a CI/CD pipeline can be time consuming, and it requires buy-in from the team and executive leadership. Some benefits of CI/CD include:

Thorough automated testing: Even the simplest implementation of CI/CD requires a robust test suite that can be run against the code every time a developer commits changes to the main branch.

Accelerated feedback loop: Developers receive immediate feedback with CI/CD. Automated tests and integration checks fail before new code is merged, which means developers can shorten the development cycle and deploy features faster.

Decreased interpersonal conflict: Automating processes and reducing friction between teams encourages a more collaborative work environment in which developers do what they do best: engineer solutions.

Reliable deploy process: Anyone who’s rolled back a deploy on a Friday afternoon can tell you how important it is that deploys go smoothly. Continuous integration ensures that code is well tested and performs reliably in a production-like environment before it ever reaches an end user.

Implementing continuous integration and continuous delivery

CI/CD is rooted in agile methodologies, and you should think of implementing it as an iterative process. Every team can benefit from a version of CI/CD, but how you customize the overall philosophy depends heavily on your current tech stack (the languages, frameworks, tools, and technology you use) and your culture.

Continuous integration

Teams that practice continuous integration (CI) merge code changes back into the master or development branch as often as possible. CI typically uses an integration tool to validate the build and run automated tests against the new code. CI allows developers on a team to work on the same area of the codebase while keeping changes minimal and avoiding massive merge conflicts. To implement continuous integration:

Write automated tests for every feature. This prevents bugs from being deployed into the production environment.

Set up a CI server. The server monitors the main repository for changes and triggers the automated tests when new commits are pushed. Your CI server should be able to run tests quickly. (A minimal sketch of what this trigger-and-test loop does appears below.)

Update developer habits. Developers need to merge changes back into the main codebase frequently. At a minimum, this merge should happen once a day.
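To make the CI-server step concrete, here is a deliberately naive sketch, in Python, of what a CI server’s core loop does: watch a Git branch for new commits and run the test suite against each one. It assumes a local clone, that the git and pytest commands are available, and a branch named main; real CI tools (Jenkins, GitHub Actions, GitLab CI, and so on) do the same job with webhooks, isolated build environments, and far more robustness.

```python
import subprocess
import time

REPO_DIR = "/path/to/clone"   # assumption: a local clone of the repository
BRANCH = "main"               # assumption: the branch developers merge into


def run(*args: str) -> str:
    """Run a command inside the repository and return its output."""
    result = subprocess.run(args, cwd=REPO_DIR, capture_output=True, text=True, check=True)
    return result.stdout.strip()


def latest_commit() -> str:
    """Fetch from the remote and return the newest commit hash on the branch."""
    run("git", "fetch", "origin", BRANCH)
    return run("git", "rev-parse", f"origin/{BRANCH}")


def run_tests(commit: str) -> bool:
    """Check out the commit and run the automated test suite."""
    run("git", "checkout", commit)
    tests = subprocess.run(["pytest", "-q"], cwd=REPO_DIR)
    return tests.returncode == 0


if __name__ == "__main__":
    last_seen = None
    while True:
        commit = latest_commit()
        if commit != last_seen:
            status = "PASSED" if run_tests(commit) else "FAILED"
            print(f"{commit[:8]}: tests {status}")  # a real CI server would notify the team
            last_seen = commit
        time.sleep(60)  # poll once a minute; real CI servers react to push webhooks instead
```

The important property is not the polling loop itself but the guarantee it illustrates: every new commit on the shared branch gets the full test suite run against it automatically.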
Continuous delivery

Continuous delivery is a step up from CI in that developers treat every change to the code as deliverable. However, in contrast to continuous deployment, a release must be triggered by a human, and the change may not be delivered to an end user immediately. Instead, deployments are automated and developers can merge and deploy their code with a single button. By making small, frequently delivered iterations, the team ensures that changes are easy to troubleshoot. After the code passes the automated tests and is built, the team can deploy it to whatever environment they specify, such as QA or staging. Often, a peer manually reviews code before an engineer merges it into a production release branch. To implement continuous delivery:

Have a strong foundation in CI. The automated test suite should grow alongside feature development, and you should add tests every time a bug is reported.

Automate releases. A human still initiates deployments, but the release should be a one-step process — a simple click of a button.

Consider feature flags. Feature flags hide incomplete features from specific users, ensuring that your peers and customers see only the functionality you want them to see. (A minimal feature-flag sketch appears at the end of this article.)

Continuous deployment

Continuous deployment takes continuous delivery one step further: every change that passes the entire production release pipeline is deployed. That’s right: the code is put directly into production. Continuous deployment eliminates human intervention from the deployment process and requires a thoroughly automated test suite. To implement continuous deployment:

Maintain a strong testing culture. You should consider testing to be a core part of the development process.

Document new features. Automated releases should not outpace API documentation.

Coordinate with other departments. Involve departments such as marketing and customer success to ensure a smooth rollout process.
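To illustrate the feature-flag idea mentioned above, here is a minimal, hypothetical sketch in Python. The flag names, the environment-variable convention, and the checkout example are all assumptions made for illustration; in practice teams usually reach for a dedicated flag service or library rather than rolling their own.

```python
import os

# Hypothetical flags, toggled per environment without redeploying code,
# e.g. FEATURE_NEW_CHECKOUT=on in staging, unset (off) in production.
DEFAULT_FLAGS = {
    "new_checkout": False,    # incomplete feature, hidden from customers
    "beta_dashboard": False,  # visible only where explicitly enabled
}


def is_enabled(flag: str) -> bool:
    """Return True if the flag is switched on via an environment variable."""
    value = os.getenv(f"FEATURE_{flag.upper()}")
    if value is None:
        return DEFAULT_FLAGS.get(flag, False)
    return value.lower() in ("1", "true", "on", "yes")


def legacy_checkout_flow(cart):
    return f"legacy checkout for {len(cart)} items"


def new_checkout_flow(cart):
    return f"new checkout for {len(cart)} items"


def checkout(cart):
    """Route to the unfinished checkout flow only when its flag is on."""
    if is_enabled("new_checkout"):
        return new_checkout_flow(cart)   # merged and deployed, but dark for most users
    return legacy_checkout_flow(cart)


if __name__ == "__main__":
    print(checkout(["book", "mug"]))  # uses the legacy flow unless FEATURE_NEW_CHECKOUT is set
```

The point of the flag is that merging code and exposing a feature become two separate decisions, which is what makes treating every change as deliverable safe in a continuous delivery pipeline.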