Articles From Arthur J. Deane
Article / Updated 12-20-2023
When studying for the CCSP exam, you must consider how to implement data security technologies and design data security strategies that fit your business and security needs. The following technologies are commonly applied as part of a comprehensive data security strategy in the cloud:

- Encryption and key management
- Hashing
- Data loss prevention (DLP)
- Data de-identification (by masking and data obfuscation)
- Tokenization

Encryption and key management

As encryption pertains to cloud data security, encryption and key management are critical topics that must be fully understood in order to pass the CCSP exam. With resource pooling (and multitenancy) being a key characteristic of cloud computing, it's important to remember that physical separation and protections are not commonly available in cloud environments. As such, strategic use of encryption is crucial to ensuring secure data storage and use in the cloud.

When designing or implementing encryption technologies, remember that an encryption architecture has three basic components (a short sketch later in this section shows the three working together):

- The data being secured
- The encryption engine that performs all encryption operations
- The encryption keys used to secure the data

While it would seem like encrypting everything would be the best way to ensure data security, it's important to consider that encryption has a performance impact on systems; system resources are used to process encryption algorithms every time data is encrypted or decrypted, which can add up if encryption is used excessively. As a CCSP, it is up to you to implement encryption so that data is as secure as possible while minimizing the impact to system performance.

Countless other challenges and considerations exist when implementing encryption technologies, both on-prem and in cloud environments. Some key cloud encryption challenges are

- Almost all data processing requires that data is in an unencrypted state. If a cloud customer is using a CSP for data analysis or processing, then encryption can be challenging to implement.
- Encryption keys are cached in memory when in use and often stay there for some time. This consideration is a major point of concern in multitenant environments because memory is a shared resource between tenants. CSPs must implement protections against tenants' keys being accessed by other tenants who share the same resources.
- Cloud data is often highly replicated (for availability purposes), which can make encryption and key management challenging. Most CSPs have mechanisms in place to ensure that any copies of encrypted data remain encrypted.
- Throughout the entire data lifecycle, data can change states, locations, and formats, which can require different applications of encryption along the way. Managing these changes may be a challenge, but understanding the Cloud Secure Data Lifecycle can help you design complete end-to-end encryption solutions.
- Encryption is a confidentiality control at heart. It does not address threats to the integrity of data on its own. Other technologies discussed throughout this chapter should be implemented to address integrity concerns.
- The effectiveness of an encryption solution depends on how securely the encryption keys are stored and managed. As soon as an encryption key gets into the wrong hands, all data protected with that key is compromised. Keys that are managed by the CSP may potentially be accessed by malicious insiders, while customer-managed encryption keys are often mishandled or mismanaged.
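To make those three components concrete, here's a minimal sketch in Python using the third-party cryptography package (my choice for illustration; in a real cloud deployment, the engine is more likely a KMS- or HSM-backed service). The data, the key, and the engine are kept deliberately distinct.

```python
# A minimal illustration of the three encryption components: the data,
# the encryption engine, and the key. Assumes the third-party
# "cryptography" package (pip install cryptography); a real cloud
# deployment would typically use a KMS- or HSM-backed engine instead.
from cryptography.fernet import Fernet

# 1. The data being secured
plaintext = b"Customer record: account 4111-xxxx-xxxx-1111"

# 2. The encryption key (generated and stored by your key-management
#    process; never hard-code keys in production)
key = Fernet.generate_key()

# 3. The encryption engine that performs all encryption operations
engine = Fernet(key)

ciphertext = engine.encrypt(plaintext)   # what gets stored or transmitted
recovered = engine.decrypt(ciphertext)   # requires the same key

assert recovered == plaintext
print("ciphertext:", ciphertext[:32], "...")
```

Notice that anyone holding the key can decrypt the ciphertext, which is exactly why the key management practices described next matter so much.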
As the last challenge in the preceding list indicates, key management is a huge factor in ensuring that encryption implementations effectively secure cloud data. Because of its importance and the challenges associated with key management in the cloud, this task is typically one of the most complicated parts of securing cloud data. When developing your organization's encryption and key management strategy, it's important that you consider the following:

- Key generation: Encryption keys should be generated within a trusted, secure cryptographic module. FIPS 140-3 validated modules have been tested and certified to meet certain requirements that demonstrate tamper resistance and integrity of encryption keys.
- Key distribution: It's important that encryption keys are distributed securely to prevent theft or compromise during transit. One best practice is to encrypt keys with a separate encryption key while distributing them to other parties (in PKI applications, for example). The worst thing that could happen is sending out a bunch of "secret" keys that get stolen by malicious eavesdroppers!
- Key storage: Encryption keys must be protected at rest (in both volatile and persistent memory) and should never be stored in plaintext. Keys may be stored and managed internally on a virtual machine or other integrated application, externally and separate from the data itself, or managed by a trusted third party that provides key escrow services for secure key management. A Hardware Security Module (HSM) is a physical device that safeguards encryption keys. Many cloud providers offer HSM services, as well as software-based HSM capabilities.
- Key destruction or deletion: At the end of the encryption key's lifecycle, there will be a time when the key is no longer needed. Key destruction is the removal of an encryption key from its operational location. Key deletion takes it a step further and also removes any information that could be used to reconstruct that key. To prevent a denial of service due to unavailable keys, deletion should occur only after an archival period that includes substantial analysis to ensure that the key is in fact no longer needed.

Cloud environments rely heavily on encryption throughout the entire data lifecycle. While encryption itself is used for confidentiality, the widespread use of encryption means that availability of the encryption keys themselves is a major concern. Pay close attention to availability as you're designing your key management systems and processes.

Hashing

Hashing is the process of taking an arbitrary piece of data and generating a unique, fixed-length string or number from it. Hashing can be applied to any type of data — documents, images, database files, virtual machines, and more. Hashing provides a mechanism to ensure the integrity of data. Hashes are similar to human fingerprints, which can be used to uniquely identify the single person to whom they belong. Even the slightest change to a large text file will noticeably change the output of the hashing algorithm.

Hashing is incredibly useful when you want to be sure that what you're looking at now is the same as what you created before. In cloud environments, hashing helps verify that virtual machine instances haven't been modified (maliciously or accidentally) without your knowledge. Simply hash your VM image before running it and compare it to the hash of the known-good VM image; the hash outputs should be identical.
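Here's a quick illustration of that fingerprint behavior using Python's standard hashlib module: the digest length is fixed, and changing a single character produces a completely different digest.

```python
# Hashing demo using the standard library: same input -> same digest,
# tiny change -> completely different digest, fixed output length.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"Quarterly financials for the cloud migration project."
tampered = b"Quarterly financials for the cloud migration project!"  # one character changed

print(sha256_hex(original))
print(sha256_hex(tampered))
print(len(sha256_hex(original)))  # always 64 hex characters for SHA-256
print(sha256_hex(original) == sha256_hex(original))  # deterministic: True
```

The same idea scales from a one-line string to a multi-gigabyte VM image; only the short digests need to be compared.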
The term hashing is sometimes used interchangeably with encryption, but they are very different! Encryption is a two-way function, meaning what can be encrypted can be decrypted. Conversely, hashing is a one-way function. You can only generate a hash of an object; you cannot retrieve an object from its hash. Encryption, again, is used to provide confidentiality, while hashing provides integrity checking. Be careful not to confuse these two terms!

Several hashing algorithms are available, but the SHA (Secure Hash Algorithm) family of algorithms is among the most popular. Specific algorithms are outside the scope of this book, but you can research SHA-1, SHA-2, and SHA-3 for additional context.

Data Loss Prevention (DLP)

Data loss prevention (DLP), also known as data leakage prevention, is the set of technologies and practices used to identify and classify sensitive data, while ensuring that sensitive data is not lost or accessed by unauthorized parties. DLP can be applied to help restrict the flow of both structured and unstructured data to authorized locations and users. Effective use of DLP goes a long way toward helping organizations safeguard their data's confidentiality, both on-prem and in the cloud. To put it plainly, DLP analyzes data storage, identifies sensitive data components, and prevents users from accidentally or maliciously sending that sensitive data to the wrong party.

When designing a DLP strategy, organizations must consider how the technology fits in with their existing technologies, processes, and architecture. DLP controls need to be thoroughly understood and applied in a manner that aligns with the organization's overall enterprise architecture in order to ensure that only the right type of data is blocked from being transmitted. Hybrid cloud users, or users that utilize a combination of cloud-based and on-prem services, should pay extremely close attention to their enterprise security architecture while developing a DLP strategy. Because data traverses both cloud and noncloud environments, a poor DLP implementation can result in segmented data security policies that are hard to manage and ineffective. DLP that is incorrectly implemented can lead to false-positives (for example, blocking legitimate traffic) or false-negatives (allowing sensitive data to be sent to unauthorized parties).

DLP implementations consist of three core components or stages:

- Discovery and classification: The first stage of DLP is discovery and classification. Discovery is the process of finding all instances of data, and classification is the act of categorizing that data based on its sensitivity and other characteristics. Examples of classifications may include "credit card data," "Social Security numbers," "health records," and so on. Comprehensive discovery and proper classification are crucial to success during the remaining DLP stages.
- Monitoring: After data has been fully discovered and classified, it can be monitored. Monitoring is an essential component of the DLP implementation and involves watching data as it moves throughout the cloud data lifecycle. The monitoring stage is where the DLP implementation looks to identify data that is being misused or handled outside of established usage policies. Effective DLP monitoring should happen on storage devices, networking devices, servers, workstations, and other endpoints — and it should evaluate traffic across all potential export routes (email, Internet browsers, and so on).
- Enforcement: The final DLP stage, enforcement, is where action is taken on policy violations identified during the monitoring stage. These actions are configured based on the classification of data and the potential impact of its loss. Violations involving less sensitive data are traditionally logged and/or alerted on, while more sensitive data can actually be blocked from unauthorized exposure or loss. A common use case here is financial services companies that detect credit card numbers being emailed to unauthorized domains and are able to stop the email in its tracks, before it ever leaves the corporate network. (A simplified sketch of this kind of pattern matching appears at the end of this DLP discussion.)

Always remember "Security follows the data" — and DLP technology is no different. When creating a DLP implementation strategy, it's important that you consider techniques for monitoring activity in every data state. DLP data states are

- DLP at rest: For data at rest, the DLP implementation is stored wherever the data is stored, such as a workstation, file server, or some other form of storage system. Although this DLP implementation is often the simplest, it may need to work in conjunction with other DLP implementations to be most effective.
- DLP in transit: Network-based DLP is data loss prevention that involves monitoring outbound traffic near the network perimeter. This DLP implementation monitors traffic over Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), and other protocols. If the network traffic being monitored is encrypted, you need to integrate encryption and key management technologies into your DLP solution. Standard DLP implementations cannot effectively monitor encrypted traffic, such as HTTPS.
- DLP in use: Host-based, or endpoint-based, DLP is data loss prevention that involves installing a DLP application on a workstation or other endpoint device. This DLP implementation allows monitoring of all data in use on the client device and provides insights that network-based DLP is not able to provide. Because of the massive scale of many cloud environments, host-based DLP can be a major challenge. There are simply too many hosts and endpoints to monitor without a sophisticated strategy that involves automated deployment. Despite this challenge, host-based DLP is not impossible in the cloud, and CSPs continue to make monitoring easier as new cloud-native DLP features become available.

After you understand DLP and how it can be used to protect cloud data, there are a few considerations that cloud security professionals commonly face when implementing cloud-based DLP:

- Cloud data is highly distributed and replicated across locations. Data can move between servers, from one data center to another, to and from backup storage, or between a customer and the cloud provider. This movement, along with the data replication that ensures availability, presents challenges that need to be worked through in a DLP strategy.
- DLP technologies can impact performance. Host-based DLP scans all data access activities on an endpoint, and network-based DLP scans all outbound network traffic across a network boundary. This constant monitoring and scanning can impact system and network performance and must be considered while developing and testing your DLP strategy.
- Cloud-based DLP can get expensive. The pay-for-what-you-use model is often a great savings to cloud customers, but when it comes to DLP, the constant resource utilization associated with monitoring traffic can quickly add up. It's important to model and plan for resource consumption costs on top of the costs of the DLP solution itself.
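The following toy sketch illustrates the monitoring and enforcement ideas described above. The regular expressions and the policy mapping are simplified examples of my own, not how any particular DLP product works; real implementations add validation (such as Luhn checks), document fingerprinting, and much more.

```python
# A toy sketch of DLP-style content inspection: classify outbound text
# against simple patterns, then decide whether to alert or block.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical policy: actions keyed by classification
POLICY = {"credit_card": "block", "ssn": "alert"}

def inspect(message: str):
    """Return (classification, action) pairs found in an outbound message."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(message):
            findings.append((label, POLICY[label]))
    return findings

outbound = "Please charge card 4111 1111 1111 1111 and file SSN 123-45-6789."
for label, action in inspect(outbound):
    print(f"{label} detected -> {action}")
```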
Data de-identification

Confidentiality is incredibly important, especially in the cloud. While mechanisms like encryption and DLP go a long way toward providing data confidentiality, they're not always feasible. Data de-identification (or anonymization) is the process of removing information that can be used to identify a specific individual from a dataset. This technique is commonly used as a privacy measure to protect Personally Identifiable Information (PII) or other sensitive information from being exposed when an entire dataset is shared. In its purest form, data de-identification simply strips out the identifying fields; for example, student names can be removed from a list of grades in order to protect the confidentiality of each student's marks. Several techniques are available to de-identify sensitive information; masking (or obfuscation) and tokenization are two of the most commonly used methods.

Masking

Masking is the process of partially or completely replacing sensitive data with random characters or other nonsensitive data. Masking, or obfuscation, can happen in a number of ways, but the most popular type of data masking replaces all but the last few characters of a value and is commonly used to protect credit card numbers and other sensitive financial information. As a cloud security professional, you can use several techniques when masking or obfuscating data. Here are a few to remember:

- Substitution: Substitution mimics the look of real data, but replaces (or appends) it with some unrelated value. Substitution can either be random or algorithmic, with the latter allowing two-way substitution — meaning if you have the algorithm, then you can retrieve the original data from the masked dataset.
- Scrambling: Scrambling mimics the look of real data, but simply jumbles the characters into a random order. For example, a customer whose account number is #5551234 may be shown as #1552435 in a development environment. (For what it's worth, my scrambled phone number is 0926381135.)
- Deletion or nulling: This technique is just what it sounds like. When using this masking technique, data appears blank or empty to anyone who isn't authorized to view it.

Aside from being used to comply with regulatory requirements (like HIPAA or PCI DSS), data masking is often used when organizations need to use production data in a test or development environment. By masking the data, development environments are able to use real data without exposing sensitive data elements to unauthorized viewers or less secure environments.

Tokenization

Tokenization is the process of substituting a sensitive piece of data with a nonsensitive replacement, called a token. The token is merely a reference back to the sensitive data, but has no meaning or sensitivity on its own. The token maintains the look and feel of the original data and is mapped back to the original data by the tokenization engine or application. Tokenization allows code to continue to run seamlessly, even with randomized tokens in place of sensitive data. Tokenization can be outsourced to external, cloud-based tokenization services (referred to as tokenization-as-a-service). When using these services, it's prudent to understand how the provider secures your data both at rest and in transit between you and their systems.
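As a rough illustration of the difference between masking and tokenization, here's a short Python sketch. The token format and the in-memory vault are simplified stand-ins for a real tokenization engine or tokenization-as-a-service provider.

```python
# Illustrative masking and tokenization helpers (not production code).
import secrets

def mask_pan(pan: str) -> str:
    """Masking: keep only the last four digits of a card number."""
    digits = [c for c in pan if c.isdigit()]
    return "*" * (len(digits) - 4) + "".join(digits[-4:])

class TokenVault:
    """Tokenization: swap sensitive values for meaningless tokens."""
    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value          # the mapping lives only in the vault
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]           # only the vault can reverse it

pan = "5551 2345 6789 0123"
vault = TokenVault()
token = vault.tokenize(pan)

print(mask_pan(pan))                    # ************0123
print(token)                            # e.g. tok_9f2c4a...
print(vault.detokenize(token) == pan)   # True
```

The masked value can never be reversed, while the token can be exchanged for the original value, but only by asking the vault that holds the mapping.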
Article / Updated 12-20-2023
Domain 3, which includes cloud platform and infrastructure security, represents 17 percent of the CCSP certification exam.

Virtualization is the process of creating software instances of actual hardware. VMs, for example, are software instances of actual computers. Software-Defined Networks are virtualized networks. Nowadays, you can find pretty much any traditional hardware available as a virtualized solution. Virtualization is the secret sauce behind cloud computing, as it allows a single piece of hardware to be shared by multiple customers. Concepts like multitenancy and resource pooling would not exist as they do today — and you wouldn't be reading this book — if it weren't for the advent of virtualization!

Virtualization offers many clear benefits. Following is a list of some of the most noteworthy:

- Increases scalability: Virtualized environments are designed to grow as your demand grows. Instead of buying new hardware, you simply spin up additional virtual instances.
- Allows faster resource provisioning: It's much quicker and easier to spin up virtualized hardware from a console than it is to physically boot up multiple pieces of hardware.
- Reduces downtime: Restoring or redeploying physical hardware takes a lot of time, especially at scale. Failover for virtualized resources can happen much more quickly, which means your systems remain up and running longer.
- Avoids vendor lock-in: Virtualization abstracts software from hardware, meaning your virtualized resources are more portable than their physical counterparts. Unhappy with your vendor? Pack up your VMs and move to another one!
- Saves time (and money): Virtualized resources can easily be centrally managed, reducing the need for personnel and equipment to maintain your infrastructure. In addition, less hardware usually means less money.

The preceding list reiterates why virtualization is such a critical technology and reminds you of the deep connection between virtualization and cloud computing.

The most common implementation of virtualization is the hypervisor. A hypervisor is a computing layer that allows multiple guest Operating Systems to run on a single physical host device. The hypervisor abstracts software from hardware and allows each of the guest OSes to share the host's hardware resources, while giving the guests the impression that they're all alone on that host. The two categories of hypervisors are Type 1 and Type 2. A Type 1 hypervisor is also known as a bare metal hypervisor, as it runs directly on the hardware. Type 2 hypervisors, however, run on the host's Operating System.

Despite all the advantages of virtualization, and hypervisors specifically, you, as a CCSP candidate, should remember some challenges:

- Hypervisor security: The hypervisor is an additional piece of software, hardware, or firmware that sits between the host and each guest. As a result, it expands the attack surface and comes with its own set of vulnerabilities that the good guys must discover and patch before the bad guys get to them. If not fixed, hypervisor flaws can lead to external attacks on VMs or even VM-to-VM attacks, where one cloud tenant can access or compromise another tenant's data.
- VM security: Virtual machines are nothing more than files that sit on a disk or other storage mechanism. Imagine your entire home computer wrapped up into a single icon that sits on your desktop — that's pretty much what a virtual machine comes down to. If not sufficiently protected, a VM image is susceptible to compromise while dormant or offline. Use controls like Access Control Lists (ACLs), encryption, and hashing to protect the confidentiality and integrity of your VM files. (A brief hash-checking sketch appears after this list.)
- Network security: Network traffic within virtualized environments cannot be monitored and protected by physical security controls, such as network-based intrusion detection systems. You must select appropriate tools to monitor inter- and intra-VM network traffic. The concept of virtual machine introspection (VMI) allows a hypervisor to monitor its guest Operating Systems during runtime. Not all hypervisors are capable of VMI, but it's a technique that can prove invaluable for securing VMs during operation.
- Resource utilization: If not properly configured, a single VM can exhaust a host's resources, leaving other VMs out of luck. Resource utilization is where the concept of limits (discussed in the "Reservations, limits, and shares" section of this chapter) comes in handy. It's essential that you manage VMs as if they share a pool of resources — because they do!
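As a small example of using hashing to protect dormant VM files, the sketch below compares a VM image against a manifest of known-good SHA-256 digests before it's launched. The file name and digest are placeholders.

```python
# Minimal sketch: verify dormant VM image files against a manifest of
# known-good SHA-256 digests before launching them.
import hashlib
from pathlib import Path

KNOWN_GOOD = {
    # image file name -> expected SHA-256 digest (placeholder value)
    "web-server.qcow2": "5b3f...replace-with-real-digest...",
}

def sha256_file(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: Path) -> bool:
    expected = KNOWN_GOOD.get(path.name)
    return expected is not None and sha256_file(path) == expected

image = Path("web-server.qcow2")
if image.exists():
    print("integrity OK" if verify(image) else "image modified or unknown; do not launch")
```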
Article / Updated 12-20-2023
The Secure Software Development Lifecycle process is covered under Domain 4, which represents 17 percent of the CCSP certification exam. Streamlined and secure application development requires a consistent methodology and a well-defined process for getting from concept to finished product. The SDLC is the series of steps that is followed to build, modify, and maintain computing software.

Business requirements

Your organization's business requirements should be a key consideration whenever you develop new software or even when you modify existing applications. You should make sure that you have a firm understanding of your organization's goals (overall and specific to your project) and knowledge of the end-user's needs and expectations. It's important to gather input from as many stakeholders as possible as early as possible to support the success of your application. Gathering requirements from relevant leaders and business units across your organization is crucial to ensuring that you don't waste development cycles on applications or features that don't meet the needs of your business. These business requirements are a critical input into the SDLC.

SDLC phases

While the SDLC process has multiple variations, it most commonly includes the following steps, or phases:

- Planning
- Defining
- Designing
- Developing
- Testing
- Deploying and maintaining

There's a good chance that you will see at least one question related to the SDLC on your exam. Remember that the titles of each phase may vary slightly from one methodology to the next, but make sure that you have a strong understanding of the overall flow and the order of operations. Although none of the stages specifically reference security, it is important that you consider security at each and every step of the SDLC process. Waiting until later stages of the process can introduce unnecessary security risks, which can add unforeseen costs and extend your project timeline.

SDLC Planning phase

The Planning phase is the most fundamental stage of the SDLC and is sometimes called Requirements Gathering. During this initial phase, the project scope is established and high-level requirements are gathered to support the remaining lifecycle phases. The project team should work with senior leadership and all project stakeholders to create the overall project timeline and identify project costs and the resources required.

During the Planning phase, you must consider all requirements and desired features and conduct a cost-benefit analysis to determine the potential financial impact versus the proposed value to the end-user. Using all the information that you gather during this phase, you should then validate the economic and technical feasibility of proceeding with the project.

The Planning phase is where risks should initially be identified. Your project team should consider what may go wrong and how you can mitigate, or lower, the impact of those risks. For example, imagine that you're building an online banking application. As part of the Planning phase, you should not only consider all functional requirements of such an application, but also security and compliance requirements, such as satisfying PCI DSS. Consider what risks currently exist within your organization (or your cloud environment) that might get in the way of demonstrating PCI DSS compliance and then plan ways to address those risks.

SDLC Defining phase

You may also see this phase referred to as Requirements Analysis.
During the Defining phase, you use all the business requirements, feasibility studies, and stakeholder input from the Planning phase to document clearly defined product requirements. Your product requirements should provide full details of the specific features and functionality of your proposed application. These requirements will ultimately feed your design decisions, so they need to be as thorough as possible.

In addition, during this phase you must define the specific hardware and software requirements for your development team — identify what type of dev environment is needed, designate your programming language, and define all technical resources needed to complete the project. This phase is where you should specifically define all your application security requirements and identify the tools and resources necessary to implement them. You should be thinking about where encryption is required, what type of access control features are needed, and what requirements you have for maintaining your code's integrity.

SDLC Designing phase

The Designing phase is where you take your product requirements and software specifications and turn them into an actual design plan, often called a design specification document. This design plan is then used during the next phase to guide the actual development and implementation of your application. During the Designing phase, your developers, systems architects, and other technical staff create the high-level system and software design to meet each identified requirement.

Your mission during this phase is to design the overall software architecture and create a plan that identifies the technical details of your application's design. In cloud development, this phase includes defining the required amount of CPU cores, RAM, and bandwidth, while also identifying which cloud services are required for full functionality of your application. This component is critical because it may identify a need for your organization to provision additional cloud resources. Your design should define all software components that need to be created, interconnections with third-party systems, the front-end user interface, and all data flows (both within the application and between users and the application). At this stage of the SDLC, you should also conduct threat modeling exercises and integrate your risk mitigation decisions (from the Planning phase) into your formal designs. In other words, you want to fully identify potential risks and account for them in your design.

SDLC Developing phase

Software developers, rejoice! After weeks or even months of project planning, you can finally write some code! During this phase of the SDLC, your development team breaks up the work documented in previous steps into pieces (or modules) that are coded individually. Database developers create the required data storage architecture, front-end developers create the interface that users will interact with, and back-end developers code all the behind-the-scenes inner workings of the application. This phase is typically the longest of the SDLC, but if the previous steps are followed carefully, it can be the least complicated part of the whole process.

During this phase, developers should conduct peer reviews of each other's code to check for flaws, and each individual module should be unit tested to verify its functionality prior to being rolled into the larger project. Some development teams skip this part and struggle mightily to debug flaws once an application is completed.
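As a tiny illustration of module-level unit testing, here's a test written with Python's standard unittest framework; the function under test is a made-up example of the kind of small module described above.

```python
# A minimal unit test for a single module, using the standard library's
# unittest. The function under test is an illustrative example only.
import unittest

def normalize_username(raw: str) -> str:
    """Trim whitespace and lowercase a username before storage."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_empty_input_stays_empty(self):
        # The caller decides how to handle an empty username.
        self.assertEqual(normalize_username(""), "")

if __name__ == "__main__":
    unittest.main()
```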
In addition to conducting functional testing of each module, the time is right to begin security testing. Your organization should conduct static code analysis and security scanning of each module before integration into the project. Failure to do so may allow individual software vulnerabilities to get lost in the overall codebase, and multiple individual security flaws may combine to present a larger aggregate, or combined, risk.

SDLC Testing phase

Once the code is fully developed, the application enters the Testing phase. During this phase, application testers seek to verify whether the application functions as desired and according to the documented requirements; the ultimate goal here is to uncover all flaws within the application and report those flaws to the developers for patching. This cyclical process continues until all product requirements have been validated and all flaws have been fixed. With a completed application in hand, security testers have more tools at their disposal to uncover security flaws. Instead of relying solely on static code analysis, testers can use dynamic analysis to identify flaws that occur only when the code is executed.

The Testing phase is one of the most crucial phases of the SDLC, as it is the main gate between your development team and customers. Testing should be conducted in accordance with an application testing plan that identifies what and how to test. Management and relevant stakeholders should carefully review and approve your testing plan before testing begins.

Deploying and maintaining

Once the application has passed the Testing phase, it is ready to be deployed for customer use. There are often multiple stages of deployment (Alpha, Beta, and General Availability are common ones), each with its own breadth of deployment (for example, alpha releases tend to be deployed to select customers, whereas general availability means it's ready for everyone). Once applications have been tested and successfully deployed, they enter a maintenance phase where they're continually monitored and updated. During the Maintaining phase, the production software undergoes an ongoing cycle of the SDLC process, where security patches and other updates go through the same planning, defining, designing, developing, testing, and deploying activities discussed in the preceding sections.

Many SDLC models include a separate phase for disposal or termination, which happens when an application is no longer needed or supported. From a security perspective, you should keep in mind that data (including portions of applications) may remain in cloud environments even after deletion. Consult your contracts and SLAs for the commitments that your CSP makes for data deletion.

Methodologies

Although the steps within the SDLC remain largely constant, several SDLC methodologies, or models, exist, and each approaches these steps in slightly different ways. Two of the most commonly referenced and used methodologies are waterfall and agile.

Waterfall

Waterfall is the oldest and most straightforward SDLC methodology. In this model, you complete one phase and then continue on to the next — you move in sequential order, flowing through every step of the cycle from beginning to end. Each phase of this model relies on successful completion of the previous phase; there's no going back, because... well, because waterfalls don't flow up.

Some advantages of the waterfall methodology include

- It's simple to manage and easy to follow.
- Tracking and measuring progress is easy because you have a clearly defined end state early on.
- The measure twice, cut once approach allows applications to be developed based upon a more complete understanding of all requirements and deliverables from the start.
- The process can largely occur without customer intervention after requirements are initially gathered. Customers and developers agree on desired outcomes early in the project.

Some challenges that come with waterfall include

- It's rigid. Requirements must be fully developed early in the process and are difficult to change once the design has been completed.
- Products may take longer to deliver compared to more iterative models, like agile.
- It relies very little on the customer or end-user, which may make some customers feel left out.
- Testing is delayed until late in the process, which may allow small issues to build up into larger ones before they're detected.

Agile

Agile is more of a new kid on the block, having been introduced in the 1990s. In this model, instead of proceeding in a linear and sequential fashion, development and testing activities occur simultaneously and cyclically. Application development is separated into sprints that produce a succession of releases, each improving upon the previous release. With the agile model, the goal is to move quickly and to fail fast — create your first release, test it, fix it, and create your next release fast!

Some advantages of the agile methodology include

- It's flexible. You can move from one phase to the next without worrying that the previous phase isn't perfect or complete.
- Time to market is much quicker than with waterfall.
- It's very user-focused; the customer has frequent opportunities to give feedback on the application.
- Risks may be reduced because the iterative nature of agile allows you to get feedback and conduct testing early and often.

Some challenges that come with agile include

- It can be challenging to apply in real-life projects, especially larger projects with many stakeholders and components.
- The product end state is less predictable than with waterfall. With agile, you iterate until you're happy with the result.
- It requires a very high level of collaboration and frequent communication between developers, customers, and other stakeholders. This challenge can be a pro, but it sometimes has a negative impact on developers and project timelines.
Cheat Sheet / Updated 12-20-2023
The Certified Cloud Security Professional (CCSP) credential is based upon a Common Body of Knowledge (CBK) jointly developed by the International Information Systems Security Certification Consortium (ISC)2 and the Cloud Security Alliance (CSA). The CBK (and the associated exam) includes six domains that cover separate, but interrelated, areas: Cloud Concepts, Architecture and Design; Cloud Data Security; Cloud Platform & Infrastructure Security; Cloud Application Security; Cloud Security Operations; and Legal, Risk and Compliance. A ton of information is in these domains, and you can use this Cheat Sheet to remember some of the most important parts.
Article / Updated 10-27-2020
There's more to successfully passing the CCSP exam than reading a test-prep book. Here are some tips to help you prepare for the exam — from the start of your journey until test day.

Brush up on the prerequisites

Cloud Computing and Information Security are two topics that involve a great deal of knowledge from different fields within Information Technology. It stands to reason, then, that mastering the field of Cloud Security requires knowledge about lots of technical (and even nontechnical) topics. Before studying for the CCSP exam, you should make sure you have a grasp of the fundamental prerequisites. In addition, you should brush up on networking (TCP/IP, routing, switching, etc.) and consider exploring the fundamentals of some of the bigger cloud providers (like Amazon Web Services, Google Cloud Platform, and Microsoft Azure).

Register for the exam

It may sound trivial, but registering for the CCSP exam is actually one of the best things you can do to prepare for the exam. By selecting and committing to an exam date early on, you give yourself a fixed target to keep in mind as you study. Having this date marked on your calendar as soon as possible helps prevent procrastination and also supports you in establishing a realistic study plan and goals. When registering for the exam, make sure that you first assess how much of the exam material you know and how much you need to learn. Consider your obligations between now and the potential exam day and make sure that the date you pick is realistic for your schedule.

In addition to giving you strong motivation (like $599 worth!) to study hard, registering for the exam early is a good idea to ensure that you get the date and time that works best for you. While Pearson VUE generally has multiple test centers and several time slots to choose from, availability can vary from city to city and based on the time of year. Once you're sure that you want to take the CCSP, go online to find your nearest test center and get registered.

Create a study plan

Create a study plan and commit to sticking to it. Depending on your knowledge level and amount of professional experience with the CCSP domains, I usually recommend between a 60- and 90-day study plan; anything shorter is likely to be too aggressive, while anything greater than 90 days often tends to lead to less intensive studying than required. When creating your study plan, be sure to take into account your work schedule, holidays, travel plans, and anything else that may get in the way of intensive studying. The most important factor of a good study plan is that it is realistic — otherwise, you're setting yourself up for failure.

How granular you get with your study plan is up to you and depends on your need for more or fewer milestones. In general, I recommend breaking study plans up into weekly objectives, but some people prefer daily targets to more regularly hold themselves accountable. Whatever you choose, make sure to allocate enough time to get through all exam material before your exam date. For some, enough time may mean two hours of studying per day, while it's perfectly normal for CCSP candidates to spend four to six hours per day studying.

Find a study buddy

Having someone to study with can make the task of preparing for the exam much easier. Maybe you know someone who's already studying for the CCSP, or perhaps you have friends or colleagues who would benefit from the exam. If you're able to pair up for some of your study sessions, you should do it.
If a traditional study partner isn't available, finding an accountability partner is a solid alternative. The objective here is to have someone you trust to check in during your CCSP journey and another ear to vent to when the going gets tough.

Take practice exams

One of the best ways to prepare for the CCSP exam is to practice with sample questions and exams that are similar to the real thing. While no practice exams completely mirror the CCSP exam, several resources are available for you to practice and assess your CCSP readiness. Steer clear of so-called exam dumps or brain dumps, which are actual CCSP exam questions that have been posted on the Internet. Not only does this method violate (ISC)2 terms, but these dumps are often either out of date or just plain wrong. Stick to trusted sources for your practice questions and exams.

Get hands-on

Experience really is the best teacher. To qualify for the CCSP cybersecurity certification, you must pass the exam and have at least five years of cumulative paid work experience in information technology, of which three years must be in information security and one year in one or more of the six domains of the (ISC)2 CCSP Common Body of Knowledge (CBK). Aside from being a requirement to get certified, this hands-on experience is the best way to gain practical, real-life experience that translates to the concepts on the exam. Getting started with cloud environments is simple and requires little more than an Internet connection and a credit card. Try setting up your own cloud environment and exploring the security features on offer. You may be surprised how quickly concepts stick when you see them in action.

Attend a CCSP training seminar

Depending on your learning style, you may benefit from taking an official (ISC)2 CCSP Training Seminar or Bootcamp. These trainings are instructor-led and offered in-person and online. In-person options are five-day courses that cover all six domains within the CCSP exam, while the online training allows more flexible scheduling. These seminars are very rigorous and give you the option to ask questions of a CCSP trainer in real time. You can find training schedules, costs, and other information in the Education & Training section.

Plan your exam strategy

It's a good idea to give some thought to how you'll approach the exam on your big day. You have three hours to answer 125 questions, which comes out to just under 90 seconds per question. You'll know the answer to many questions in a fraction of that time, but you should plan ahead for how you'll approach questions that you don't immediately know the answers to. One strategy is to answer all the easier questions and flag the harder ones to review and answer at the end. The drawback to this approach is that you can be left with quite a few challenging questions to answer in a relatively short period of time. Another approach is to use the process of elimination to narrow things down to the two most probable answers; if you can get the toughest questions down to 50/50 chances, you're likely in good shape. Aside from knowing when to skip questions and when to make educated guesses, you should have a strategy for taking breaks. If you don't build breaks into your exam strategy, you may forget to take them when the heat is on.

Get some rest and relaxation

I've seen people still studying for certification exams as they're walking into the examination center. While it's good to double- or triple-check your knowledge, at some point you either know the information or you don't.
As a general rule, I recommend using the day before the test as your cutoff point and setting your study materials aside 24 hours before the exam. Find something you enjoy doing that doesn't involve reading technical reference materials or cramming for an exam. Catch up on shows you've missed while studying, go out for a bike ride, or hang out with the friends and family who probably feel neglected by now! Whatever you do, remain confident in the study plan you created and followed, and find as many ways to relax as possible.
Article / Updated 10-26-2020
Each of the cloud service categories — IaaS, PaaS, and SaaS — provides access to data storage, but each model uses its own storage types. Each of the service categories and storage types comes with its own specific threats and security considerations. As you design and implement your cloud data storage architecture, you must consider which service category you're building or implementing and the unique data security characteristics associated with that service model.

Domain 2: Cloud Data Security covers a wide range of technical and operational topics and is the most heavily weighted domain on the CCSP exam, representing 19 percent of the CCSP certification exam.

Storage types

IaaS uses volume and object storage, PaaS uses structured and unstructured data, while SaaS can use a wide assortment of storage types.

IaaS

The infrastructure as a service model provides cloud customers with a self-service means to access and manage compute, storage, and networking resources. Storage is allocated to customers on an as-needed basis, and customers are charged only for what they consume. The IaaS service model uses two categories of storage:

- Volume: A volume is a virtual hard drive that can be attached to a Virtual Machine (VM) and used much like a physical hard drive. The VM Operating System views the volume the same way any OS would view a physical hard drive in a traditional server model. The virtual drive can be formatted with a file system like FAT32 or NTFS and managed by the customer. Examples of volume storage include AWS Elastic Block Store (EBS), VMware Virtual Machine File System (VMFS), and Google Persistent Disk. You may also see volume storage referred to as block storage; the two terms can be used interchangeably.
- Object: An object is file storage that can be accessed directly through an API or web interface, without being attached to an Operating System. Data kept in object storage includes the object data and metadata and can store any kind of information, including photos, videos, documents, and more. Many CSPs have interfaces that present object storage in a similar fashion to standard file tree structures (like a Windows directory), but the files are actually just virtual objects in an independent storage structure that relies on key values to reference and retrieve them. Amazon S3 (Simple Storage Service) and Azure Blob Storage are popular examples of object storage.

PaaS

Platform as a service storage design differs from IaaS storage design because the cloud provider is responsible for the entire platform (as opposed to IaaS, where the CSP is only responsible for providing the volume allocation) and the customer only manages the application. The PaaS service model utilizes two categories of storage:

- Structured: Structured data is information that is highly organized, categorized, and normalized. This type of data can be placed into a relational database or other storage system that includes rulesets and structure (go figure!) for searching and running operations on the data. Structured Query Language (SQL) is one of the most popular database programming languages used to search and manipulate structured data(bases). Remembering that SQL is a database language is an easy way to associate structured data with databases. (A short sketch follows these storage types.)
- Unstructured: Unstructured data is information that cannot be easily organized and formatted for use in a rigid data structure, like a database. Audio files, videos, word documents, web pages, and other forms of text and multimedia fit into this data type.
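To make the structured-versus-unstructured distinction concrete, here's a short sketch using Python's built-in sqlite3 module; the table and values are illustrative only.

```python
# Structured data has a schema you can query with SQL; unstructured data
# (documents, images, video) does not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")
conn.execute("INSERT INTO customers (name, region) VALUES (?, ?)", ("Acme Corp", "us-east"))

# Structured: rows conform to the schema and can be searched with SQL.
row = conn.execute("SELECT name FROM customers WHERE region = ?", ("us-east",)).fetchone()
print(row[0])  # Acme Corp

# Unstructured: a video or word-processing file is just opaque bytes to
# the database; it would typically live in object storage instead.
unstructured_blob = b"\x00\x01...binary video frames..."
print(len(unstructured_blob), "bytes with no queryable structure")
```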
SaaS

For software as a service, the cloud provider is responsible for managing not only the entire infrastructure and platform, but also the application itself. For this reason, the cloud user has minimal control over what types of data go into the system; their only data storage responsibility is to put permissible data into the application. While they're not quite true data types, the SaaS service model commonly utilizes two types (or methods) of data storage:

- Information storage and management: This type of data storage involves the customer entering data into the application via the web interface, and the application storing and managing that data in a back-end database. Data may also be generated by the application, on behalf of the customer, and similarly stored and managed. This application-generated data is internally stored on volume or object storage, but is hidden from the customer.
- Content and file storage: With this type of data storage, the customer uploads data through the web application, but instead of being stored in an integral database, the content and files are stored in other storage mechanisms that users can access.

Some other terms you should be familiar with are ephemeral and raw-disk storage. Ephemeral storage is temporary storage that accompanies more permanent storage. Ephemeral storage is useful for temporary data, such as buffers, caches, and session information. Raw-disk storage is storage that allows data to be accessed directly at the byte level, rather than through a filesystem. You may or may not be tested on this information, but you're likely to come across these terms at some point.

Threats to storage types

The ultimate threat to any storage type is a compromise that impacts the confidentiality, integrity, or availability of the data being stored. While specific attack vectors vary based on the storage type, the following list identifies some common threats to any type of data storage:

- Unauthorized access or usage: This type of threat involves the viewing, modification, or usage of data stored in any storage type by either an external unauthorized party or a malicious insider who may have credentials to the environment but who uses them in an unauthorized manner. The attack vectors from external threat actors can be anything from using malware to gain escalated privileges to using phishing techniques to steal credentials from users who have access to the data. To protect against insider threats related to unauthorized access and usage, CSPs should have mechanisms and processes in place to require multiple parties to approve access to customer data, where possible. Mechanisms should also be in place to detect access to customer data, along with processes to validate that the access was legitimate. Cloud customers should consider using Hardware Security Modules (HSMs) wherever possible to help control access to their data by managing their own encryption keys.
- Data leakage and exposure: The nature of cloud computing requires data to be replicated and distributed across multiple locations, often around the world. This fact increases threats associated with data leakage if cloud providers don't pay careful attention to how replicated data is protected. Customers want to know that their data is secured consistently across locations, not only for peace of mind against leakage, but also for regulatory compliance purposes.
- Denial of Service: DoS and DDoS attacks are a huge threat to the availability of data stored within cloud storage. Cloud networks that are not resilient may face challenges handling sudden spikes in bandwidth, which can result in authorized users not being able to access data when they need it.
- Corruption or loss of data: Corruption or loss of data can affect the integrity and/or availability of data and may impact specific data in storage or the entire storage array. These threats can arise by intentional or accidental means, including technical failures, natural disasters, human error, or malicious compromises (for example, a hack). Redundancy within cloud environments helps prevent complete loss of data, but cloud customers should carefully read CSPs' data terms, including availability and durability SLOs and SLAs.

Durability (or reliability) is the concept of using data redundancy to ensure that data is not lost, compromised, or corrupted. The term has been used for years in traditional IT circles and is just as important in cloud security. Durability differs from availability in that availability focuses on uptime through hardware redundancy. It's very possible (but not desirable) to have a system that stays up 100 percent of the time while all of the data within it is corrupted. The goal of a secure cloud environment is, of course, to get as close as possible to 100 percent availability (uptime) and durability (reliability). Despite this lofty goal, CSPs' actual commitment for each tends to be 99 percent followed by some number of 9s (like 99.999999 percent).
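Those strings of nines translate into surprisingly different amounts of allowable downtime. Here's a quick calculation you can adapt when comparing availability commitments in SLAs.

```python
# Translating "a number of 9s" into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.5%} availability allows about {downtime:,.1f} minutes of downtime per year")
```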
Article / Updated 10-26-2020
You can rely on many of the same principles from traditional IT models when designing secure cloud computing environments, but the cloud does present additional considerations. For the CCSP exam, remember that cloud computing comes with its own set of benefits and challenges in managing the data lifecycle, disaster recovery, and business continuity planning. In addition, you must consider unique factors when conducting a cost-benefit analysis to determine whether moving to the cloud makes sense for your organization.

Cloud Secure Data Lifecycle

I often use the phrase "security follows the data" when describing data security. What I mean by this phrase is that keeping data secure requires an understanding of where the data is, both physically and in its lifecycle. The phases of the cloud secure data lifecycle are described in the following list.

- Create. Data is either generated from scratch or existing data is modified/updated.
- Store. Data is saved into a storage system or repository.
- Use. Data is processed by a user or application.
- Share. Data is made accessible to other users or systems.
- Archive. Data is transferred from readily accessible storage to a more long-term, static storage state.
- Destroy. Data is deleted and removed from storage, making it permanently inaccessible and unusable.

Understanding the differences between each of these phases is an important prerequisite to defining data security processes and identifying appropriate security mechanisms. Specific data security controls depend on any regulatory, contractual, and business requirements that your organization must satisfy.

Cloud-based disaster recovery (DR) and business continuity (BC) planning

This section gives you some tips as a CCSP candidate concerned with business continuity planning and disaster recovery in cloud environments. Some important points to consider are

- Understand how the shared responsibility model applies to BCP/DR.
- Understand any supply chain risks that exist (for example, vendor or third-party factors that may impact your ability to conduct BCP/DR activities).
- Consider the need to keep backups offsite or with another CSP.
- Ensure that SLAs cover all aspects of BCP/DR that concern your organization, including RPOs, RTOs, points of contact, roles and responsibilities, and so on. (A tiny RPO/RTO check follows this section.)

Service Level Agreements are tremendously important to consider when planning for business continuity and disaster recovery. SLAs should clearly describe requirements for redundancy and failover, as well as address mitigating single points of failure in the cloud environment. In addition, you should look for clear language on your ability to move to another cloud provider should your DR and BC requirements not be met. While having all of this documented and agreed upon is a must, it's also important that you periodically review and test that these agreements continue to hold true.
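One simple way to keep BCP/DR commitments honest is to check your backup and recovery numbers against the RPO and RTO targets written into your SLAs. The figures in this sketch are hypothetical placeholders.

```python
# Sanity-check backup and recovery plans against RPO and RTO targets.
# All numbers below are hypothetical placeholders.
RPO_MINUTES = 60            # maximum tolerable data loss
RTO_MINUTES = 240           # maximum tolerable time to restore service

backup_interval_minutes = 30      # how often backups are taken
last_restore_test_minutes = 180   # measured duration of the latest DR test

print("RPO met" if backup_interval_minutes <= RPO_MINUTES else "RPO at risk")
print("RTO met" if last_restore_test_minutes <= RTO_MINUTES else "RTO at risk")
```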
Cost benefit analysis

Any organization considering moving to the cloud should first conduct a thorough cost-benefit analysis to determine whether the features offered by the cloud justify the costs associated with migrating to and maintaining a cloud environment. The following list identifies some factors worth considering, but each organization's cost-benefit analysis may differ based on its own requirements:

- Steady versus cyclical demand: One of the essential characteristics of cloud is rapid elasticity. Companies that see cyclical demand stand to benefit the most from cloud computing. Think of a retailer that sees demand go up and down based on seasonal changes and holidays. When these types of customers own and operate their own data centers, they must purchase and maintain enough resources to support their highest capacity periods (Black Friday, for example). During other times of the year, these customers are still paying to operate facilities that are likely running at only a fraction of their capacity. Some organizations, however, don't experience cyclical spikes, so the equation for them is a little different. Every potential cloud customer must evaluate their own demand trends and determine whether cloud computing offers a financially attractive option for running their workloads.
- CapEx versus OpEx: One of the biggest changes for companies moving to the cloud is a drastic shift from capital expenditures to operational expenditures. Rather than paying to keep data centers up and running, companies in the cloud carry OpEx costs associated with cloud oversight, management, and auditing. Organizations must evaluate whether their current org structure, business model, and staffing can support this shift. For example, some companies may realize that their current staff is not sufficiently equipped to take on new roles, and the costs of hiring new personnel may be prohibitive to moving to the cloud. (A rough break-even sketch follows this list.)
- Ownership and control: Organizations that own and operate their own hardware maintain full ownership and control over their systems and data; they get to change what they want, when they want. When moving to the cloud, some of this control is traded for convenience and flexibility. While organizations can negotiate favorable contractual terms and SLAs, they will never maintain the same direct control they have with on-prem solutions. While many customers are willing to make this tradeoff, each organization has to assess its own priorities and determine whether relinquishing some level of control is acceptable.
- Organizational focus: One of the key benefits of cloud is that organizations can shift their focus from operating systems to overseeing their operation, a difference that can be significant and allow organizations to focus more on their core business rather than managing IT operations. While this benefit is clear, some organizations may not be completely ready to transition their existing operations staff into other roles. Business leaders must evaluate how ready they are for such an organizational shift and whether the pros outweigh potential pitfalls.
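Here's a very rough sketch of the kind of CapEx-versus-OpEx comparison mentioned above. Every number is a made-up placeholder; the point is simply to compare total cost over the same planning horizon.

```python
# Rough CapEx-versus-OpEx comparison over a planning horizon.
# All figures are made-up placeholders; substitute your own estimates.
capex_hardware = 250_000          # servers, storage, network gear
on_prem_annual_ops = 60_000       # power, space, maintenance staff
cloud_annual_spend = 120_000      # pay-as-you-go plus oversight/audit costs
years = 5

on_prem_total = capex_hardware + on_prem_annual_ops * years
cloud_total = cloud_annual_spend * years

print(f"On-prem over {years} years: ${on_prem_total:,}")
print(f"Cloud over {years} years:   ${cloud_total:,}")
print("Cheaper option:", "cloud" if cloud_total < on_prem_total else "on-prem")
```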
While it is up to the CSP to logically (or virtually) segregate one tenant's data from another's, it is the responsibility of each customer to protect the data they deploy in accordance with any regulatory or contractual requirements they may have.

Virtual machine (VM) attacks: Active VMs are susceptible to the same security vulnerabilities as physical servers — and whether active or offline, a compromised VM poses a threat to other VMs that share the same server. This risk exists because VMs that reside on the same physical machine share memory, storage, hypervisor access, and other hardware and software resources. The CSP is responsible for preventing, detecting, and responding to malicious activity between VMs.

Hypervisor attacks: Compromising the hypervisor can be considered a gold mine for attackers. An exploited hypervisor can yield access to the physical server, all tenant virtual machines, and any hosted applications.

Network security: Cloud environments rely on virtual networks that contain virtual switch software that controls the movement of traffic within the cloud. These switches, if not properly configured and secured, can be vulnerable to layer 2 attacks, such as ARP poisoning. Because customers do not have the same level of control over the network that they would with on-prem solutions, it is up to the CSP to tightly control network activity and be transparent with the customer (within reason) about how their data is protected.

Denial of service (DoS) attacks: DoS attacks are a threat in both cloud environments and traditional data center environments. The nature of cloud and the sheer size of many CSPs may make it more challenging to take down a cloud service, but it's certainly not impossible. If one cloud customer is experiencing a DoS attack, it can potentially impact other customers on the same hypervisor. Even though hypervisors have mechanisms to prevent a single host from consuming 100 percent of a system's resources, if a single host consumes enough resources, it can leave insufficient compute, memory, and networking for other hosts.

PaaS security concerns

Because PaaS services are platform-based rather than infrastructure-based, they present a different set of security considerations. Some key PaaS security considerations include

Resource isolation: PaaS tenants generally have little to no system-level access to the underlying environment. This restriction ensures resource isolation and prevents a single customer from impacting multiple customers with infrastructure- or platform-level configurations. If a customer is able to change underlying configurations within the environment, it can negatively impact other customers that share resources, as well as make it very hard for the CSP to effectively manage and secure the environment.

User permissions: It's important that each tenant in the PaaS environment is able to manage their user permissions independently. That is, each instance of the PaaS offering should allow the respective cloud customer to configure their own user-level permissions.

User access management: Cloud security professionals must evaluate their organization's business needs and determine what access model works for them in the cloud. It's crucial that you find the balance between quick user provisioning and proper, secure authentication and authorization. Cloud environments offer a great deal of power to automate these tasks, thanks to elasticity and autoscaling.
Malware, backdoors, and other nasty threats: Autoscaling means that anytime there's a backdoor or other piece of malware within a cloud environment, it can grow and scale automatically without intervention from a cloud security professional. In addition, these threats can start with one PaaS customer and rapidly expand to other customers if not detected and eradicated. It's up to the CSP to continuously monitor for new vulnerabilities and exploits, and it's the customer's job to use secure coding best practices.

SaaS security concerns

SaaS presents very different security concerns from its infrastructure and platform peers. While most of these concerns will fall on the CSP, it's important that you are familiar with some key SaaS security considerations:

Data comingling: The characteristic of multitenancy means that multiple customers share the same cloud infrastructure. In SaaS deployments, many customers are likely to store their data in applications with shared servers, shared storage, or even potentially shared databases. This comingling of data means that any cross-site scripting (XSS), SQL injection, or other vulnerability can expose not one but potentially all customers in the shared SaaS environment (see the sketch after this list). It's up to the CSP to ensure that customer data is segregated as much as possible, through the use of encryption and other technologies. In addition, the CSP is responsible for conducting vulnerability scans and penetration testing to identify and fix weaknesses before they are exploited.

Data access policies: When evaluating a SaaS solution, you should carefully consider your organization's existing data access policies and evaluate how well they align with the cloud provider's capabilities. Again, multitenancy means that several customers are sharing the same resources — so the level of data access configuration and customization is potentially limited. As a cloud security professional, you want to be sure that your company's data is not only protected from other cloud customers, but you also want to ensure that you're able to control access between different users within your own company (separate access for developers and HR, for example).

Web application security: SaaS applications, by nature, are connected to the Internet and are meant to be available at all times. This interconnection means that they are constantly vulnerable to attacks and compromise. Because cloud applications hang their hat on high availability, any exploit that takes down a web app can have a great impact on a cloud customer. Imagine if an organization's cloud-based payroll systems suddenly went down on pay day!
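To make the data-comingling concern a bit more concrete, here is a minimal Python sketch. It uses the standard library's sqlite3 module as a stand-in for a shared SaaS database, and the invoices table, its columns, and the tenant names are made up purely for illustration. The sketch shows how a query built by string concatenation lets one tenant's malicious input pull back every tenant's rows, while a parameterized, tenant-scoped query treats the same input as harmless data. It illustrates the underlying risk only; it is not a depiction of how any particular CSP stores customer data.

import sqlite3

# Toy stand-in for a shared, multitenant SaaS database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("tenant_a", "Acme Corp", 1200.00),
     ("tenant_b", "Globex", 875.50)],
)

def search_unsafe(tenant_id, customer):
    # String concatenation: attacker-supplied input becomes part of the SQL itself.
    query = (f"SELECT tenant_id, customer, amount FROM invoices "
             f"WHERE tenant_id = '{tenant_id}' AND customer = '{customer}'")
    return conn.execute(query).fetchall()

def search_safe(tenant_id, customer):
    # Parameterized query: input is treated as data, and results stay tenant-scoped.
    query = ("SELECT tenant_id, customer, amount FROM invoices "
             "WHERE tenant_id = ? AND customer = ?")
    return conn.execute(query, (tenant_id, customer)).fetchall()

malicious = "x' OR '1'='1"
print(search_unsafe("tenant_a", malicious))  # Returns BOTH tenants' rows
print(search_safe("tenant_a", malicious))    # Returns nothing: no match, no leak

The takeaway: in a shared environment, a single injectable query is potentially every customer's problem, not just one customer's.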
Article / Updated 10-26-2020
It would be great if you could just do your security magic, and nothing bad would ever happen. Unfortunately, you can't fix every vulnerability or stop every threat . . . so it's important that you're prepared to handle whatever comes your way. For the CCSP exam, you need to know the basics of incident handling. The field of incident handling deals with preparing for, addressing, and recovering from security incidents. NIST SP 800-61 defines the following:

An event is any observable occurrence in a system or network.
Adverse events are events with negative consequences.
A computer security incident is a violation or imminent threat of violation of computer security policies, acceptable use policies, or standard security practices.

While the focus in this section is specifically on computer security incidents, keep in mind that these principles also apply to power failures, natural disasters, and so on. Though every incident starts as an event (or multiple events), not every event is necessarily an incident. To take the distinction even further, not every event is even considered adverse or negative. Some examples of events include

A website visitor downloads a file.
A user enters an incorrect password.
An Administrator is granted root access to a router.
A firewall rejects a connection attempt.
A server crashes.

Of the preceding events, only the last one is inherently adverse, and without further information, you can't call any of them incidents. The following list includes some examples of incidents:

A hacker encrypts all your sensitive data and demands a ransom for the keys.
A user inside your organization steals your customers' credit card data.
An Administrator at your company is tricked into clicking on a link inside a phishing email, resulting in a backdoor connection for an attacker.
Your HR system is taken offline by a Distributed Denial of Service (DDoS) attack.

Make sure you don't use the terms event and incident interchangeably; they're not the same. I have heard IT professionals refer to simple events as incidents, and that's a great way to sound alarms that don't need to be sounded! Incident handling starts well before an incident even occurs and ends even after things are back to normal. The Incident Response (IR) Lifecycle (shown) describes the steps you take before, during, and after an incident. The key components of the IR Lifecycle are

Preparation
Detection
Containment
Eradication
Recovery
Post-Mortem

Preparing for security incidents

You can easily overlook preparation as part of the incident response process, but it's a critical step for the rapid response to and recovery from incidents. This phase of incident handling includes things like

Developing an Incident Response Plan. Your Incident Response Plan identifies procedures to follow when an incident occurs, as well as roles and responsibilities of all stakeholders.
Periodically testing your Incident Response Plan. Determine your plan's effectiveness by conducting table-top exercises and incident simulations.
Implementing preventative measures to keep the number of incidents as low as possible. This process includes finding vulnerabilities in your systems, conducting threat assessments, and applying a layered approach to security controls (things like network security, host-based security, and so on) to minimize risk.
Setting up incident analysis equipment.
This process can include forensic workstations, backup media, and evidence-gathering accessories (cameras, notebooks and pens, and storage bags/bins to preserve crucial evidence and maintain chain of custody).

Detecting security incidents

During this phase, you acknowledge that an incident has indeed occurred, and you feverishly put your IR Plan into action. This phase is all about gathering as much information as possible and analyzing it to gain insights into the origin and impact of the breach. Some examples of activities during this phase are

Conducting log analysis and looking for unusual behavior. A good Security Information and Event Management (SIEM) tool can help aggregate different log sources and provide more intelligent data for your analysis. You're looking for the smoking gun, or at least a trail of breadcrumbs, that can alert you to how the attack took place.
Identifying the impact of the incident. What systems were impacted? What data was impacted? How many customers were impacted?
Notifying appropriate individuals. Your IR Plan should detail who to contact for specific types of incidents. During this phase, you'll need to enact your communications plan to alert the proper teams and stakeholders.
Documenting findings. The situation will likely be frantic, so organization is critical. You want to keep detailed notes of your findings, actions taken, chain of custody, and other relevant information. This activity helps with your post-mortem reporting later on and also helps with keeping track of important details that can assist in tracing the attack back to its origin. In addition, many incidents require reporting to law enforcement or other external parties; thorough notes and a strong chain of custody help support any investigations that may arise.

Depending on various factors (the nature of the incident, your industry, or any contractual obligations), you may be responsible for notifying customers, law enforcement, or even US-CERT (part of the Department of Homeland Security). Make sure that you keep a comprehensive list of parties to notify in case of a breach.

Containing security incidents

The last thing you want to deal with during an incident is an even bigger incident. Containment is extremely important to stop the bleeding and prevent further damage. It also allows you to use your incident response resources more efficiently and avoid exhausting your analysis and remediation capacity. Some common containment activities include

Disabling Internet connectivity for affected systems
Isolating/quarantining malware-infected systems from the rest of your network
Reviewing and/or changing potentially compromised passwords
Capturing forensic images and memory dumps from impacted systems

Eradicating security incidents

By the time you reach this phase, your primary mission is to remove the threat from your system(s). Eradication involves eliminating any components of the incident that remain. Depending on the number of impacted hosts, this phase can be fairly short or last for quite some time. Here are some key activities during this phase:

Securely removing all traces of malware
Disabling or recreating impacted user and system accounts
Identifying and patching all vulnerabilities (starting with the ones that led to the breach!)
Restoring known-good backups
Wiping or rebuilding critically damaged systems

Recovering from security incidents

The objective of the Recovery phase is to bring impacted systems back into your operational environment and fully resume business as usual.
Depending on your organization's IR Plan, this phase may be closely aligned with or share steps with the Eradication phase. It can take several months to fully recover from a large-scale compromise, so you need to have both short-term and long-term recovery objectives that align with your organization's needs. Some common recovery activities include

Confirming that vulnerabilities have been patched and fully remediated
Validating that systems are functioning normally
Restoring systems to normal operations (for example, reconnecting Internet access, restoring connection to your production network, and so on)
Closely monitoring systems for any remaining signs of undesirable activity

Conducting a security incident post-mortem

After your systems are back up and running and the worst is over, you need to focus your attention on the lessons learned from the incident. The primary objective of the Post-Mortem phase is to document the lessons and implement the changes required to prevent a similar type of incident from happening in the future. All members of the Incident Response Team (and supporting personnel) should meet to discuss what worked, what didn't work, and what needs to change within the organization moving forward. Here are some questions to consider during Post-Mortem discovery and documentation:

What vulnerability (technical or otherwise) did this breach exploit?
What could have been done differently to prevent this incident or decrease its impact on your organization?
How can you respond more effectively during future incidents?
What policies need to be updated, and with what content?
How should you train your employees differently?
What security controls need to be modified or implemented?
Do you have proper funding to ensure you are prepared to handle future breaches?

For a more in-depth review of handling security incidents, refer to NIST's Computer Security Incident Handling Guide (SP 800-61).
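If it helps to picture the event-versus-incident distinction during the Detection phase, here is a minimal Python sketch that tallies failed logins from a handful of made-up authentication records and raises a flag only when a pattern crosses a threshold. The log format, field names, and threshold are assumptions for illustration; in practice, a SIEM tool performs this kind of aggregation and correlation across many log sources, and your IR Plan dictates who gets notified.

from collections import Counter

# Hypothetical auth log entries: (username, source_ip, result).
# A single failed login is just an event; a burst of failures from one
# source is an adverse pattern worth escalating for investigation.
auth_log = [
    ("alice", "10.0.0.5", "FAILURE"),
    ("alice", "10.0.0.5", "FAILURE"),
    ("bob",   "10.0.0.9", "SUCCESS"),
    ("alice", "10.0.0.5", "FAILURE"),
    ("alice", "10.0.0.5", "FAILURE"),
    ("alice", "10.0.0.5", "FAILURE"),
]

FAILURE_THRESHOLD = 5  # Assumed tuning value, not a standard.

failures = Counter(
    (user, ip) for user, ip, result in auth_log if result == "FAILURE"
)

for (user, ip), count in failures.items():
    if count >= FAILURE_THRESHOLD:
        # In practice, this would trigger the notifications named in the IR Plan.
        print(f"Possible incident: {count} failed logins for {user} from {ip}")
    else:
        print(f"Event only: {count} failed login(s) for {user} from {ip}")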
Article / Updated 10-26-2020
These core security concepts are crucial to passing the CCSP exam. Discover the most fundamental security topics and begin to set the stage for what you need to know to pass the exam. You need to understand a few foundational principles before embarking on your CCSP journey.

The pillars of information security

Information security is the practice of protecting information by maintaining its confidentiality, integrity, and availability. These three principles form the pillars of information security, and they're often referred to as the CIA triad. Although different types of data and systems may prioritize one over the others, the three principles work together and depend on each other to successfully secure your information. After all, you can't have a triangle with two legs!

Confidentiality

Confidentiality entails limiting access to data to authorized users and systems. In other words, confidentiality prevents exposure of information to anyone who is not an intended party. If you receive a letter in the mail, the principle of confidentiality means that you're the intended recipient of that letter; opening and reading someone else's letter violates the principle of confidentiality. The concept of confidentiality is closely related to the security best practice of least privilege, which asserts that access to information should only be granted on a need-to-know basis. In order to enforce the principle of least privilege and maintain confidentiality, it's important that you classify (or categorize) data by its sensitivity level. Keep in mind that data classification plays a critical role in ensuring confidentiality. You must know what data you own and how sensitive it is before determining how to protect it and who to protect it from. Privacy is a hot topic that focuses on the confidentiality of personal data. Personal information such as names, birthdates, addresses, and Social Security numbers is referred to as personally identifiable information (PII).

Integrity

Integrity maintains the accuracy, validity, and completeness of information and systems. It ensures that data is not tampered with by anyone other than an authorized party for an authorized purpose. If your mail carrier opens your mail, destroys the letter inside, and seals it back up — well, you have a pretty mean mail carrier! In addition to not being a very nice person, your mail carrier has violated the principle of integrity: The letter did not reach the intended audience (you) in the same state that the sender sent it. A checksum is a value derived from a piece of data that uniquely identifies that data and is used to detect changes that may have been introduced during storage or transmission. Checksums are generated by cryptographic hashing algorithms and help you validate the integrity of data.

Availability

Availability is all about ensuring that authorized users can access required data when and where they need it. Availability is sometimes the forgotten little sibling of the principles mentioned in the two preceding sections, but it has a special place in the cloud given that easy access to data is often a major selling point for cloud services. If your letter gets lost in the mail, then availability is a clear issue — the message that was intended for you to read is no longer accessible for you to read. One of the most common attacks on availability is Distributed Denial of Service, or DDoS, which is a coordinated attack by multiple compromised machines causing disruption to a system's availability.
Aside from sophisticated cyber attacks, something as simple as accidentally deleting a file can compromise availability. Availability is a major consideration for cloud systems.

Threats, vulnerabilities, and risks . . . oh my!

They aren't lions, tigers, or bears — but for many security professionals, threats, vulnerabilities, and risks are just as scary. Threats, vulnerabilities, and risks are interrelated terms describing things that may compromise the pillars of information security for a given system or asset (the thing you're protecting). The field of risk management deals with identifying threats and vulnerabilities and quantifying and addressing the risk associated with them. Being able to recognize threats, vulnerabilities, and risks is a critical skill for information security professionals. It's important that you're able to identify the things that may cause your systems and data harm in order to better plan, design, and implement protections against them.

Threats

A threat is anything capable of intentionally or accidentally compromising an asset's security. Some examples of common threats include

A fire wipes out your data center.
A hacker gains access to your customer database.
An employee clicks a link in a phishing e-mail.

Though only a few examples, the preceding short list shows how threats can come in all shapes and sizes and how they can be natural or manmade, malicious or accidental.

Vulnerabilities

A vulnerability is a weakness or gap existing within a system; it's something that, if not taken care of, may be exploited in order to compromise an asset's confidentiality, integrity, or availability. Examples of vulnerabilities include

A faulty fire suppression system
Unpatched software
Lack of security awareness training for employees

Threats are pretty harmless without an associated vulnerability, and vice versa. A good fire detection and suppression system gives your data center a fighting chance, just like (you hope) thorough security awareness training for your organization's employees will neutralize the threat of an employee clicking on a link in a phishing email.

Risks

Risk is the intersection of threat and vulnerability that defines the likelihood of a vulnerability being exploited (by a threat actor) and the impact should that exploit occur. In other words, risk is used to define the potential for damage or loss of an asset. Some examples of risks include

A fire wipes out your data center, making service unavailable for five days.
A hacker steals half of your customers' credit card numbers, causing significant reputational damage for your company.
An attacker gains root privilege through a phishing email and steals your agency's Top Secret defense intelligence.

Risk = Threat x Vulnerability. This simple equation is the cornerstone of risk management.
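To see how the Risk = Threat x Vulnerability idea can be put to work, here is a minimal Python sketch that rates a few of the scenarios above on simple 1-to-5 scales and ranks them by the resulting score. The scales and the specific ratings are illustrative assumptions, not a prescribed methodology.

# Each scenario pairs a threat with a vulnerability, rated on a 1-5 scale
# (5 = most likely / most severe). The ratings below are illustrative only.
scenarios = [
    {"name": "Fire destroys the data center", "threat": 2, "vulnerability": 4},
    {"name": "Phishing email leads to stolen credentials", "threat": 5, "vulnerability": 3},
    {"name": "Unpatched software exploited by an attacker", "threat": 4, "vulnerability": 5},
]

for s in scenarios:
    s["risk"] = s["threat"] * s["vulnerability"]  # Risk = Threat x Vulnerability

# Rank the scenarios so the highest-risk items get attention (and budget) first.
for s in sorted(scenarios, key=lambda s: s["risk"], reverse=True):
    print(f"{s['name']}: risk score {s['risk']}")

Scores like these are only as meaningful as the ratings behind them, but even a rough ranking helps you decide where to focus your protections first.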