Study Skills & Test Prep Articles
Testing, testing, 1-2-3. From college placement exams through advanced professional certifications, we've got easy-to-follow prep plans built to foster success. Take practice quizzes, get test-day tips, and bone up on whatever it is you need to know.
Articles From Study Skills & Test Prep
Cheat Sheet / Updated 05-10-2024
The Graduate Record Examinations (GRE) is your gateway to getting into the graduate school of your choice, maybe even with a scholarship, which then opens the doors to your career path. This Cheat Sheet is a collection of tips and key information that can help you score well on the GRE, get into graduate school, and further your career goals.
Cheat Sheet / Updated 05-07-2024
Making sure you have a handle on algebra, geometry, and trigonometry is a solid start for success on the math section of the ACT. However, to boost your confidence — and your score — even higher, you should master some helpful test-taking strategies, as well as make sure you know how to translate word problems into equations and use sketches to figure out what a tricky-sounding question is really asking.
Cheat Sheet / Updated 04-12-2024
The certified public accountant (CPA) credential is a valuable tool for you to obtain. The designation will help you pursue a career in accounting and in other areas of business. To become a CPA, you must study for and pass the CPA exam. To increase your chances of success, you need to make sure you’ve prepared properly, including completing the necessary paperwork and requirements for your state, and studying thoroughly for each of the four tests that make up the exam.
Cheat Sheet / Updated 04-12-2024
Any professional military commander will tell you that knowing your enemy is the first step in winning a battle. After all, how can you expect to pass the Armed Services Vocational Aptitude Battery (ASVAB) if you don’t know what’s on the test? Here are some test-taking tips and key information about ASVAB test formats and subtests to help you score well, get into the service of your choice, and qualify for your dream job.
Article / Updated 12-20-2023
When studying for the CCSP exam, you must consider how to implement data security technologies and design data security strategies that fit your business and security needs. The following technologies are commonly applied as part of a comprehensive data security strategy in the cloud:
- Encryption and key management
- Hashing
- Data loss prevention (DLP)
- Data de-identification (by masking and data obfuscation)
- Tokenization

Encryption and key management

As encryption pertains to cloud data security, encryption and key management are critical topics that must be fully understood in order to pass the CCSP exam. With resource pooling (and multitenancy) being a key characteristic of cloud computing, it’s important to remember that physical separation and protections are not commonly available in cloud environments. As such, strategic use of encryption is crucial to ensuring secure data storage and use in the cloud.

When designing or implementing encryption technologies, remember that an encryption architecture has three basic components (a brief sketch of how they fit together appears at the end of this section):
- The data being secured
- The encryption engine that performs all encryption operations
- The encryption keys used to secure the data

While it would seem like encrypting everything would be the best way to ensure data security, it’s important to consider that encryption has a performance impact on systems; system resources are used to process encryption algorithms every time data is encrypted or decrypted, which can add up if encryption is used excessively. As a CCSP, it is up to you to implement encryption so that data is as secure as possible while minimizing the impact on system performance.

Countless other challenges and considerations exist when implementing encryption technologies, both on-prem and in cloud environments. Some key cloud encryption challenges are
- Almost all data processing requires that data is in an unencrypted state. If a cloud customer is using a CSP for data analysis or processing, then encryption can be challenging to implement.
- Encryption keys are cached in memory when in use and often stay there for some time. This consideration is a major point of concern in multitenant environments because memory is a shared resource between tenants. CSPs must implement protections against tenants’ keys being accessed by other tenants who share the same resources.
- Cloud data is often highly replicated (for availability purposes), which can make encryption and key management challenging. Most CSPs have mechanisms in place to ensure that any copies of encrypted data remain encrypted.
- Throughout the entire data lifecycle, data can change states, locations, and formats, which can require different applications of encryption along the way. Managing these changes may be a challenge, but understanding the Cloud Secure Data Lifecycle can help you design complete end-to-end encryption solutions.
- Encryption is a confidentiality control at heart. It does not address threats to the integrity of data on its own. Other technologies discussed throughout this chapter should be implemented to address integrity concerns.
- The effectiveness of an encryption solution depends on how securely the encryption keys are stored and managed. As soon as an encryption key gets into the wrong hands, all data protected with that key is compromised. Keys that are managed by the CSP may potentially be accessed by malicious insiders, while customer-managed encryption keys are often mishandled or mismanaged.
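To tie together the three components described above (the data being secured, the encryption engine, and the keys), here is a minimal sketch in Python. It assumes the third-party cryptography package is installed; the exam material does not prescribe a specific library, so treat this purely as an illustration of how the pieces separate.

```python
# Minimal sketch of the three encryption components: the data, the engine, and the key.
# Assumes the third-party "cryptography" package; any symmetric cipher shows the same idea.
from cryptography.fernet import Fernet

# The encryption key: in a real deployment this comes from an HSM or a key
# management service, never hard-coded or stored alongside the data.
key = Fernet.generate_key()

# The encryption engine: performs all encrypt/decrypt operations.
engine = Fernet(key)

# The data being secured.
plaintext = b"Customer record: account 12345, balance 987.65"

ciphertext = engine.encrypt(plaintext)   # what gets stored in the cloud
recovered = engine.decrypt(ciphertext)   # only possible with the key

assert recovered == plaintext
print(ciphertext[:16], "...")
```

Losing the key makes the ciphertext unrecoverable, and leaking the key compromises every record protected with it, which is why the key management practices described next matter so much.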
As the last of those challenges indicates, key management is a huge factor in ensuring that encryption implementations effectively secure cloud data. Because of its importance and the challenges associated with key management in the cloud, this task is typically one of the most complicated aspects of securing cloud data. When developing your organization’s encryption and key management strategy, it’s important that you consider the following:
- Key generation: Encryption keys should be generated within a trusted, secure cryptographic module. FIPS 140-3 validated modules have been tested and certified to meet certain requirements that demonstrate tamper resistance and integrity of encryption keys.
- Key distribution: It’s important that encryption keys are distributed securely to prevent theft or compromise during transit. One best practice is to encrypt keys with a separate encryption key while distributing them to other parties (in PKI applications, for example). The worst thing that could happen is sending out a bunch of “secret” keys that get stolen by malicious eavesdroppers!
- Key storage: Encryption keys must be protected at rest (in both volatile and persistent memory) and should never be stored in plaintext. Keys may be stored and managed internally on a virtual machine or other integrated application, externally and separate from the data itself, or managed by a trusted third party that provides key escrow services for secure key management. A Hardware Security Module (HSM) is a physical device that safeguards encryption keys. Many cloud providers offer HSM services, as well as software-based HSM capabilities.
- Key destruction or deletion: At the end of the encryption key’s lifecycle, there will be a time when the key is no longer needed. Key destruction is the removal of an encryption key from its operational location. Key deletion takes it a step further and also removes any information that could be used to reconstruct that key. To prevent a denial of service due to unavailable keys, deletion should occur only after an archival period that includes substantial analysis to ensure that the key is in fact no longer needed.

Cloud environments rely heavily on encryption throughout the entire data lifecycle. While encryption itself is used for confidentiality, the widespread use of encryption means that availability of the encryption keys themselves is a major concern. Pay close attention to availability as you’re designing your key management systems and processes.

Hashing

Hashing is the process of taking an arbitrary piece of data and generating a unique, fixed-length string or number from it. Hashing can be applied to any type of data — documents, images, database files, virtual machines, and more. Hashing provides a mechanism to ensure the integrity of data. Hashes are similar to human fingerprints, which can be used to uniquely identify the single person to whom that fingerprint belongs. Even the slightest change to a large text file will noticeably change the output of the hashing algorithm.

Hashing is incredibly useful when you want to be sure that what you’re looking at now is the same as what you created before. In cloud environments, hashing helps verify that virtual machine instances haven’t been modified (maliciously or accidentally) without your knowledge. Simply hash your VM image before running it and compare it to the hash of the known-good VM image; the hash outputs should be identical.
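As a concrete illustration of that integrity check, the short sketch below uses Python’s standard hashlib to compute a SHA-256 digest of a VM image and compare it with a previously recorded value. The file names are hypothetical placeholders, not part of any particular platform.

```python
# Integrity check for a VM image using only the Python standard library.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash a file in chunks so even large VM images never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

image_path = "guest-vm.qcow2"                                  # hypothetical image file
known_good = open(image_path + ".sha256").read().split()[0]    # digest recorded when the image was known good

current = sha256_of(image_path)
if current == known_good:
    print("Image unchanged -- safe to launch.")
else:
    print("Digest mismatch -- image was modified; do not launch.")
```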
The term hashing is sometimes used interchangeably with encryption, but they are very different! Encryption is a two-way function, meaning what can be encrypted can be decrypted. Conversely, hashing is a one-way function. You can only generate a hash of an object; you cannot retrieve an object from its hash. Encryption, again, is used to provide confidentiality, while hashing provides integrity checking. Be careful not to confuse these two terms!

Several hashing algorithms are available, but the SHA (Secure Hash Algorithm) family of algorithms is among the most popular. Specific algorithms are outside the scope of this book, but you can research SHA-1, SHA-2, and SHA-3 for additional context.

Data Loss Prevention (DLP)

Data loss prevention (DLP), also known as data leakage prevention, is the set of technologies and practices used to identify and classify sensitive data, while ensuring that sensitive data is not lost or accessed by unauthorized parties. DLP can be applied to help restrict the flow of both structured and unstructured data to authorized locations and users. Effective use of DLP goes a long way toward helping organizations safeguard their data’s confidentiality, both on-prem and in the cloud. To put it plainly, DLP analyzes data storage, identifies sensitive data components, and prevents users from accidentally or maliciously sending that sensitive data to the wrong party.

When designing a DLP strategy, organizations must consider how the technology fits in with their existing technologies, processes, and architecture. DLP controls need to be thoroughly understood and applied in a manner that aligns with the organization’s overall enterprise architecture in order to ensure that only the right type of data is blocked from being transmitted. Hybrid cloud users, or users that utilize a combination of cloud-based and on-prem services, should pay extremely close attention to their enterprise security architecture while developing a DLP strategy. Because data traverses both cloud and noncloud environments, a poor DLP implementation can result in segmented data security policies that are hard to manage and ineffective. DLP that is incorrectly implemented can lead to false positives (for example, blocking legitimate traffic) or false negatives (allowing sensitive data to be sent to unauthorized parties).

DLP implementations consist of three core components, or stages:
- Discovery and classification: The first stage of DLP is discovery and classification. Discovery is the process of finding all instances of data, and classification is the act of categorizing that data based on its sensitivity and other characteristics. Examples of classifications may include “credit card data,” “Social Security numbers,” “health records,” and so on. Comprehensive discovery and proper classification are crucial to success during the remaining DLP stages.
- Monitoring: After data has been fully discovered and classified, it can be monitored. Monitoring is an essential component of the DLP implementation and involves watching data as it moves throughout the cloud data lifecycle. The monitoring stage is where the DLP implementation looks to identify data that is being misused or handled outside of established usage policies. Effective DLP monitoring should happen on storage devices, networking devices, servers, workstations, and other endpoints — and it should evaluate traffic across all potential export routes (email, Internet browsers, and so on).
- Enforcement: The final DLP stage, enforcement, is where action is taken on policy violations identified during the monitoring stage. These actions are configured based on the classification of data and the potential impact of its loss. Violations involving less sensitive data are traditionally logged and/or alerted on, while more sensitive data can actually be blocked from unauthorized exposure or loss. A common use case here is financial services companies that detect credit card numbers being emailed to unauthorized domains and are able to stop the email in its tracks, before it ever leaves the corporate network. (A toy sketch of these three stages appears at the end of this section.)

Always remember “security follows the data” — and DLP technology is no different. When creating a DLP implementation strategy, it’s important that you consider techniques for monitoring activity in every data state. DLP data states are
- DLP at rest: For data at rest, the DLP implementation is stored wherever the data is stored, such as a workstation, file server, or some other form of storage system. Although this DLP implementation is often the simplest, it may need to work in conjunction with other DLP implementations to be most effective.
- DLP in transit: Network-based DLP is data loss prevention that involves monitoring outbound traffic near the network perimeter. This DLP implementation monitors traffic over Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), and other protocols. If the network traffic being monitored is encrypted, you will need to integrate encryption and key management technologies into your DLP solution. Standard DLP implementations cannot effectively monitor encrypted traffic, such as HTTPS.
- DLP in use: Host-based, or endpoint-based, DLP is data loss prevention that involves installing a DLP application on a workstation or other endpoint device. This DLP implementation allows monitoring of all data in use on the client device and provides insights that network-based DLP is not able to provide. Because of the massive scale of many cloud environments, host-based DLP can be a major challenge. There are simply too many hosts and endpoints to monitor without a sophisticated strategy that involves automated deployment. Despite this challenge, host-based DLP is not impossible in the cloud, and CSPs continue to make monitoring easier as new cloud-native DLP features become available.

After you understand DLP and how it can be used to protect cloud data, there are a few considerations that cloud security professionals commonly face when implementing cloud-based DLP:
- Cloud data is highly distributed and replicated across locations. Data can move between servers, from one data center to another, to and from backup storage, or between a customer and the cloud provider. This movement, along with the data replication that ensures availability, presents challenges that need to be worked through in a DLP strategy.
- DLP technologies can impact performance. Host-based DLP scans all data access activities on an endpoint, and network-based DLP scans all outbound network traffic across a network boundary. This constant monitoring and scanning can impact system and network performance and must be considered while developing and testing your DLP strategy.
- Cloud-based DLP can get expensive. The pay-for-what-you-use model is often a great savings to cloud customers, but when it comes to DLP, the constant resource utilization associated with monitoring traffic can quickly add up. It’s important to model and plan for resource consumption costs on top of the costs of the DLP solution itself.
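To make the discovery-and-classification and enforcement stages more concrete, here is a deliberately tiny sketch in Python. The regular expressions, classification labels, and block/alert policy are simplified assumptions for illustration, not a production DLP rule set.

```python
# Toy discovery/classification/enforcement pass over outbound text.
import re

PATTERNS = {
    "credit card data": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Social Security numbers": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

POLICY = {  # enforcement action per classification
    "credit card data": "block",
    "Social Security numbers": "block",
}

def classify(text):
    """Discovery and classification: return the sensitive-data labels found in the text."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

def enforce(text):
    """Enforcement: decide whether an outbound message may leave the network."""
    findings = classify(text)
    if any(POLICY.get(label) == "block" for label in findings):
        return ("blocked", findings)
    return ("allowed", findings)

print(enforce("Quarterly report attached."))
print(enforce("Card number 4111 1111 1111 1111, exp 11/28"))
```

Real DLP products add context (who is sending, to where, over which protocol) and far more robust detection, but the flow is the same: find the data, label it, and act on the label.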
Data de-identification

Confidentiality is incredibly important, especially in the cloud. While mechanisms like encryption and DLP go a long way toward providing data confidentiality, they’re not always feasible. Data de-identification (or anonymization) is the process of removing information that can be used to identify a specific individual from a dataset. This technique is commonly used as a privacy measure to protect Personally Identifiable Information (PII) or other sensitive information from being exposed when an entire dataset is shared. In its purest form, de-identification might mean, for example, removing student names from a set of grades so that the grades can be shared without revealing whose they are. Several techniques are available to de-identify sensitive information; masking (or obfuscation) and tokenization are two of the most commonly used methods.

Masking

Masking is the process of partially or completely replacing sensitive data with random characters or other nonsensitive data. Masking, or obfuscation, can happen in a number of ways; the most popular form is commonly used to protect credit card numbers and other sensitive financial information. As a cloud security professional, you can use several techniques when masking or obfuscating data. Here are a few to remember:
- Substitution: Substitution mimics the look of real data, but replaces (or appends) it with some unrelated value. Substitution can either be random or algorithmic, with the latter allowing two-way substitution — meaning if you have the algorithm, then you can retrieve the original data from the masked dataset.
- Scrambling: Scrambling mimics the look of real data, but simply jumbles the characters into a random order. For example, a customer whose account number is #5551234 may be shown as #1552435 in a development environment. (For what it’s worth, my scrambled phone number is 0926381135.)
- Deletion or nulling: This technique is just what it sounds like. When using this masking technique, data appears blank or empty to anyone who isn’t authorized to view it.

Aside from being used to comply with regulatory requirements (like HIPAA or PCI DSS), data masking is often used when organizations need to use production data in a test or development environment. By masking the data, development environments are able to use real data without exposing sensitive data elements to unauthorized viewers or less secure environments.

Tokenization

Tokenization is the process of substituting a sensitive piece of data with a nonsensitive replacement, called a token. The token is merely a reference back to the sensitive data, but has no meaning or sensitivity on its own. The token maintains the look and feel of the original data and is mapped back to the original data by the tokenization engine or application. Tokenization allows code to continue to run seamlessly, even with randomized tokens in place of sensitive data. Tokenization can be outsourced to external, cloud-based tokenization services (referred to as tokenization-as-a-service). When using these services, it’s prudent to understand how the provider secures your data both at rest and in transit between you and their systems.
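As a rough illustration of both ideas, the sketch below masks a credit card number for display and runs a tiny in-memory tokenization engine. Real tokenization services persist and harden the token vault; the names and structure here are assumptions made only for demonstration.

```python
# Illustrative masking and tokenization helpers (in-memory only; a real
# tokenization engine keeps its vault in hardened, access-controlled storage).
import uuid

def mask_card_number(card_number):
    """Masking: show only the last four digits, e.g. '************1111'."""
    digits = card_number.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

class TokenizationEngine:
    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, value):
        # Token keeps a similar "shape" to the original but carries no meaning.
        token = uuid.uuid4().hex[:len(value.replace(" ", ""))]
        self._vault[token] = value
        return token

    def detokenize(self, token):
        return self._vault[token]

card = "4111 1111 1111 1111"
print(mask_card_number(card))             # ************1111

engine = TokenizationEngine()
token = engine.tokenize(card)
print(token)                              # nonsensitive stand-in
print(engine.detokenize(token) == card)   # True: only the engine maps it back
```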
Article / Updated 12-20-2023
Domain 3, which includes cloud platform and infrastructure security, represents 17 percent of the CCSP certification exam.

Virtualization is the process of creating software instances of actual hardware. VMs, for example, are software instances of actual computers. Software-defined networks are virtualized networks. Nowadays, you can pretty much find any traditional hardware available as a virtualized solution. Virtualization is the secret sauce behind cloud computing, as it allows a single piece of hardware to be shared by multiple customers. Concepts like multitenancy and resource pooling would not exist as they do today — and you wouldn’t be reading this book — if it weren’t for the advent of virtualization!

Virtualization offers many clear benefits. Following is a list of some of the most noteworthy:
- Increases scalability: Virtualized environments are designed to grow as your demand grows. Instead of buying new hardware, you simply spin up additional virtual instances.
- Allows faster resource provisioning: It’s much quicker and easier to spin up virtualized hardware from a console than it is to physically boot up multiple pieces of hardware.
- Reduces downtime: Restoring or redeploying physical hardware takes a lot of time, especially at scale. Failover for virtualized resources can happen much more quickly, which means your systems remain up and running longer.
- Avoids vendor lock-in: Virtualization abstracts software from hardware, meaning your virtualized resources are more portable than their physical counterparts. Unhappy with your vendor? Pack up your VMs and move to another one!
- Saves time (and money): Virtualized resources can easily be managed centrally, reducing the need for personnel and equipment to maintain your infrastructure. In addition, less hardware usually means less money.

The preceding list reiterates why virtualization is such a critical technology and reminds you of the deep connection between virtualization and cloud computing.

The most common implementation of virtualization is the hypervisor. A hypervisor is a computing layer that allows multiple guest Operating Systems to run on a single physical host device. The hypervisor abstracts software from hardware and allows each of the guest OSes to share the host’s hardware resources, while giving the guests the impression that they’re all alone on that host. The two categories of hypervisors are Type 1 and Type 2. A Type 1 hypervisor is also known as a bare-metal hypervisor, as it runs directly on the hardware. Type 2 hypervisors, however, run on the host’s Operating System.

Despite all the advantages of virtualization, and hypervisors specifically, you, as a CCSP candidate, should remember some challenges:
- Hypervisor security: The hypervisor is an additional piece of software, hardware, or firmware that sits between the host and each guest. As a result, it expands the attack surface and comes with its own set of vulnerabilities that the good guys must discover and patch before the bad guys get to them. If not fixed, hypervisor flaws can lead to external attacks on VMs or even VM-to-VM attacks, where one cloud tenant can access or compromise another tenant’s data.
- VM security: Virtual machines are nothing more than files that sit on a disk or other storage mechanism. Imagine your entire home computer wrapped up into a single icon that sits on your desktop — that’s pretty much what a virtual machine comes down to. If not sufficiently protected, a VM image is susceptible to compromise while dormant or offline. Use controls like Access Control Lists (ACLs), encryption, and hashing to protect the confidentiality and integrity of your VM files (a short sketch follows this list).
- Network security: Network traffic within virtualized environments cannot be monitored and protected by physical security controls, such as network-based intrusion detection systems. You must select appropriate tools to monitor inter- and intra-VM network traffic. The concept of virtual machine introspection (VMI) allows a hypervisor to monitor its guest Operating Systems during runtime. Not all hypervisors are capable of VMI, but it’s a technique that can prove invaluable for securing VMs during operation.
- Resource utilization: If not properly configured, a single VM can exhaust a host’s resources, leaving other VMs out of luck. Resource utilization is where the concept of limits (discussed in the “Reservations, limits, and shares” section of this chapter) comes in handy. It’s essential that you manage VMs as if they share a pool of resources — because they do!
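Here is a minimal sketch of the kind of dormant-image protection described in the VM security item above, using only the Python standard library. The file name, permission choice, and sidecar digest file are illustrative assumptions; on a real platform you would also use the ACLs and encryption offered by the storage layer.

```python
# Illustrative hardening of a dormant VM image: restrict who can read it and
# record a digest so later tampering is detectable.
import hashlib
import os
import stat

image = "dormant-guest.qcow2"   # hypothetical VM image file

# Tighten filesystem permissions (owner read/write only).
os.chmod(image, stat.S_IRUSR | stat.S_IWUSR)

# Record a SHA-256 digest alongside the image for later integrity checks.
# (Reading the whole file at once is fine for a sketch; chunk it for huge images.)
digest = hashlib.sha256(open(image, "rb").read()).hexdigest()
with open(image + ".sha256", "w") as f:
    f.write(f"{digest}  {image}\n")

print("Recorded digest:", digest)
```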
Article / Updated 12-20-2023
The Secure Software Development Lifecycle process is covered under Domain 4, which represents 17 percent of the CCSP certification exam. Streamlined and secure application development requires a consistent methodology and a well-defined process of getting from concept to finished product. The SDLC is the series of steps followed to build, modify, and maintain computing software.

Business requirements

Your organization’s business requirements should be a key consideration whenever you develop new software or even when you modify existing applications. You should make sure that you have a firm understanding of your organization’s goals (overall and specific to your project) and knowledge of the end-user’s needs and expectations. It’s important to gather input from as many stakeholders as possible as early as possible to support the success of your application. Gathering requirements from relevant leaders and business units across your organization is crucial to ensuring that you don’t waste development cycles on applications or features that don’t meet the needs of your business. These business requirements are a critical input into the SDLC.

SDLC phases

While the SDLC process has multiple variations, it most commonly includes the following steps, or phases:
- Planning
- Defining
- Designing
- Developing
- Testing
- Deploying and maintaining

There’s a good chance that you will see at least one question related to the SDLC on your exam. Remember that the titles of each phase may vary slightly from one methodology to the next, but make sure that you have a strong understanding of the overall flow and the order of operations. Although none of the stages specifically references security, it is important that you consider security at each and every step of the SDLC process. Waiting until later stages of the process can introduce unnecessary security risks, which can add unforeseen costs and extend your project timeline.

SDLC Planning phase

The Planning phase is the most fundamental stage of the SDLC and is sometimes called Requirements Gathering. During this initial phase, the project scope is established and high-level requirements are gathered to support the remaining lifecycle phases. The project team should work with senior leadership and all project stakeholders to create the overall project timeline and identify project costs and the resources required. During the Planning phase, you must consider all requirements and desired features and conduct a cost-benefit analysis to determine the potential financial impact versus the proposed value to the end-user. Using all the information that you gather during this phase, you should then validate the economic and technical feasibility of proceeding with the project.

The Planning phase is where risks should initially be identified. Your project team should consider what may go wrong and how you can mitigate, or lower, the impact of those risks. For example, imagine that you’re building an online banking application. As part of the Planning phase, you should not only consider all functional requirements of such an application, but also security and compliance requirements, such as satisfying PCI DSS. Consider what risks currently exist within your organization (or your cloud environment) that might get in the way of demonstrating PCI DSS compliance and then plan ways to address those risks.

SDLC Defining phase

You may also see this phase referred to as Requirements Analysis.
During the Defining phase, you use all the business requirements, feasibility studies, and stakeholder input from the Planning phase to document clearly defined product requirements. Your product requirements should provide full details of the specific features and functionality of your proposed application. These requirements will ultimately feed your design decisions, so they need to be as thorough as possible. In addition, during this phase you must define the specific hardware and software requirements for your development team — identify what type of dev environment is needed, designate your programming language, and define all technical resources needed to complete the project. This phase is where you should specifically define all your application security requirements and identify the tools and resources necessary to develop them accordingly. You should be thinking about where encryption is required, what type of access control features are needed, and what requirements you have for maintaining your code’s integrity.

SDLC Designing phase

The Designing phase is where you take your product requirements and software specifications and turn them into an actual design plan, often called a design specification document. This design plan is then used during the next phase to guide the actual development and implementation of your application. During the Designing phase, your developers, systems architects, and other technical staff create the high-level system and software design to meet each identified requirement. Your mission during this phase is to design the overall software architecture and create a plan that identifies the technical details of your application’s design.

In cloud development, this phase includes defining the required amount of CPU cores, RAM, and bandwidth, while also identifying which cloud services are required for full functionality of your application. This component is critical because it may identify a need for your organization to provision additional cloud resources. Your design should define all software components that need to be created, interconnections with third-party systems, the front-end user interface, and all data flows (both within the application and between users and the application). At this stage of the SDLC, you should also conduct threat modeling exercises and integrate your risk mitigation decisions (from the Planning phase) into your formal designs. In other words, you want to fully identify potential risks and address them in your design.

SDLC Developing phase

Software developers, rejoice! After weeks or even months of project planning, you can finally write some code! During this phase of the SDLC, your development team breaks up the work documented in previous steps into pieces (or modules) that are coded individually. Database developers create the required data storage architecture, front-end developers create the interface that users will interact with, and back-end developers code all the behind-the-scenes inner workings of the application. This phase is typically the longest of the SDLC, but if the previous steps are followed carefully, it can be the least complicated part of the whole process.

During this phase, developers should conduct peer reviews of each other’s code to check for flaws, and each individual module should be unit tested to verify its functionality prior to being rolled into the larger project. Some development teams skip this part and struggle mightily to debug flaws once an application is completed.
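As a tiny illustration of module-level unit testing, the sketch below exercises a hypothetical helper function with Python’s built-in unittest framework. The function and its expected behavior are made up purely for demonstration.

```python
# Minimal example of unit testing a single module before it is rolled into
# the larger project (Python's built-in unittest framework).
import unittest

def mask_account_number(value: str) -> str:
    """Hypothetical module under test: keep only the last four digits visible."""
    digits = value.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

class MaskAccountNumberTests(unittest.TestCase):
    def test_masks_all_but_last_four(self):
        self.assertEqual(mask_account_number("5551234"), "***1234")

    def test_ignores_spaces(self):
        self.assertEqual(mask_account_number("555 1234"), "***1234")

if __name__ == "__main__":
    unittest.main()
```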
In addition to conducting functional testing of each module, the time is right to begin security testing. Your organization should conduct static code analysis and security scanning of each module before integration into the project. Failure to do so may allow individual software vulnerabilities to get lost in the overall codebase, and multiple individual security flaws may combine to present a larger aggregate risk, or combined risk.

SDLC Testing phase

Once the code is fully developed, the application enters the Testing phase. During this phase, application testers seek to verify whether the application functions as desired and according to the documented requirements; the ultimate goal here is to uncover all flaws within the application and report those flaws to the developers for patching. This cyclical process continues until all product requirements have been validated and all flaws have been fixed. With a completed application in hand, security testers have more tools at their disposal to uncover security flaws. Instead of relying solely on static code analysis, testers can use dynamic analysis to identify flaws that occur only when the code is executed.

The Testing phase is one of the most crucial phases of the SDLC, as it is the main gate between your development team and customers. Testing should be conducted in accordance with an application testing plan that identifies what and how to test. Management and relevant stakeholders should carefully review and approve your testing plan before testing begins.

Deploying and maintaining

Once the application has passed the Testing phase, it is ready to be deployed for customer use. There are often multiple stages of deployment (Alpha, Beta, and General Availability are common ones), each with its own breadth of deployment (for example, alpha releases tend to be deployed to select customers, whereas general availability means it’s ready for everyone). Once applications have been tested and successfully deployed, they enter a maintenance phase where they’re continually monitored and updated. During the Maintaining phase, the production software undergoes an ongoing cycle of the SDLC process, where security patches and other updates go through the same planning, defining, designing, developing, testing, and deploying activities discussed in the preceding sections.

Many SDLC models include a separate phase for disposal or termination, which happens when an application is no longer needed or supported. From a security perspective, you should keep in mind that data (including portions of applications) may remain in cloud environments even after deletion. Consult your contracts and SLAs for the commitments that your CSP makes for data deletion.

Methodologies

Although the steps within the SDLC remain largely constant, several SDLC methodologies, or models, exist, and each approaches these steps in slightly different ways. Two of the most commonly referenced and used methodologies are waterfall and agile.

Waterfall

Waterfall is the oldest and most straightforward SDLC methodology. In this model, you complete one phase and then continue on to the next — you move in sequential order, flowing through every step of the cycle from beginning to end. Each phase of this model relies on successful completion of the previous phase; there’s no going back, because... well, because waterfalls don’t flow up.

Some advantages of the waterfall methodology include
- It’s simple to manage and easy to follow.
- Tracking and measuring progress is easy because you have a clearly defined end state early on.
- The measure twice, cut once approach allows applications to be developed based upon a more complete understanding of all requirements and deliverables from the start.
- The process can largely occur without customer intervention after requirements are initially gathered. Customers and developers agree on desired outcomes early in the project.

Some challenges that come with waterfall include
- It’s rigid. Requirements must be fully developed early in the process and are difficult to change once the design has been completed.
- Products may take longer to deliver compared to more iterative models, like agile.
- It relies very little on the customer or end-user, which may make some customers feel left out.
- Testing is delayed until late in the process, which may allow small issues to build up into larger ones before they’re detected.

Agile

Agile is more of the new kid on the block, having been introduced in the 1990s. In this model, instead of proceeding in a linear and sequential fashion, development and testing activities occur simultaneously and cyclically. Application development is separated into sprints that produce a succession of releases, each of which improves upon the previous release. With the agile model, the goal is to move quickly and to fail fast — create your first release, test it, fix it, and create your next release fast!

Some advantages of the agile methodology include
- It’s flexible. You can move from one phase to the next without worrying that the previous phase isn’t perfect or complete.
- Time to market is much quicker than with waterfall.
- It’s very user-focused; the customer has frequent opportunities to give feedback on the application.
- Risks may be reduced because the iterative nature of agile allows you to get feedback and conduct testing early and often.

Some challenges that come with agile include
- It can be challenging to apply in real-life projects, especially larger projects with many stakeholders and components.
- The product end-state is less predictable than with waterfall. With agile, you iterate until you’re happy with the result.
- It requires a very high level of collaboration and frequent communication between developers, customers, and other stakeholders. This challenge can be a pro, but sometimes has a negative impact on developers and project timelines.
Cheat Sheet / Updated 12-20-2023
The Certified Cloud Security Professional (CCSP) credential is based upon a Common Body of Knowledge (CBK) jointly developed by the International Information Systems Security Certification Consortium (ISC)2 and the Cloud Security Alliance (CSA). The CBK (and the associated exam) includes six domains that cover separate, but interrelated, areas: Cloud Concepts, Architecture and Design; Cloud Data Security; Cloud Platform & Infrastructure Security; Cloud Application Security; Cloud Security Operations; and Legal, Risk and Compliance. A ton of information is in these domains, and you can use this Cheat Sheet to remember some of the most important parts.
Cheat Sheet / Updated 11-01-2023
If you're thinking about joining the U.S. military, your Armed Forces Qualification Test (AFQT) score may well be the most important score you achieve on any military test. You need a qualifying score on the AFQT, or your plans for enlistment come to a dead end — and each branch of the military has its own minimum AFQT score requirements. Part of getting a high score on the AFQT involves brushing up on your math skills. You need to memorize key formulas and use proven test-taking strategies to maximize your chances for a high math score. The other part is making sure you have a firm grasp on English; in order to ace the language parts of the AFQT, you need a solid vocabulary and good reading comprehension skills.