Articles From Lawrence C. Miller
Cheat Sheet / Updated 07-27-2024
The Certified Information Systems Security Professional (CISSP) certification is based upon a Common Body of Knowledge (CBK) determined by the International Information Systems Security Certification Consortium, Inc. (ISC2). It is defined through eight tested domains: Security and Risk Management; Asset Security; Security Engineering; Communication and Network Security; Identity and Access Management; Security Assessment and Testing; Security Operations; and Software Development Security. Put the following CISSP test prep tips to good use and prove that you have mastered these domains.
Article / Updated 04-14-2023
On the CISSP exam, you need to be able to recognize the techniques used to identify and fix vulnerabilities in systems, as well as the techniques for security assessment and testing of the various types of systems.

Client-based systems

The types of design vulnerabilities often found on endpoints involve defects in client-side code, such as the code present in browsers and applications. The defects most often found include these:

- Sensitive data left behind in the file system. Generally, this consists of temporary files and cache files, which may be accessible by other users and processes on the system.
- Unprotected local data. Local data stores may have loose permissions and lack encryption.
- Vulnerable applets. Many browsers and other client applications employ applets for viewing documents and video files. Often, the applets themselves have exploitable weaknesses.
- Unprotected or weakly protected communications. Data transmitted between the client and other systems may use weak encryption or no encryption at all.
- Weak or nonexistent authentication. Authentication methods on the client, or between the client and server systems, may be unnecessarily weak. This permits an adversary to access the application, local data, or server data without first authenticating.

Other weaknesses may be present in client systems; for a more complete understanding of application weaknesses, consult OWASP. Identifying weaknesses like the preceding examples requires one or more of the following techniques:

- Operating system examination
- Network sniffing
- Code review
- Manual testing and observation

Server-based systems

Design vulnerabilities found on servers fall into the following categories:

- Sensitive data left behind in the file system. Generally, this consists of temporary files and cache files, which may be accessible by other users and processes on the system.
- Unprotected local data. Local data stores may have loose permissions and lack encryption.
- Unprotected or weakly protected communications. Data transmitted between the server and other systems (including clients) may use weak encryption or no encryption at all.
- Weak or nonexistent authentication. Authentication methods on the server may be unnecessarily weak. This permits an adversary to access the application, local data, or server data without first authenticating.

These defects are similar to those in client-based systems because the terms client and server have only to do with perspective: in both cases, software is running on a system.
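Two recurring weaknesses above, sensitive data left behind in the file system and loose permissions on local data, can be checked for directly. Here is a minimal audit sketch in Python, assuming a POSIX host; the /tmp starting point and the world-readable test are illustrative placeholders, not a complete assessment.

```python
import os
import stat

def world_readable_files(root="/tmp"):
    """Walk a directory tree and flag files that any local user can read."""
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if mode & stat.S_IROTH:  # "other" read bit set
                findings.append(path)
    return findings

for path in world_readable_files():
    print("world-readable:", path)
```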
Database systems

Database management systems are nearly as complex as the operating systems on which they reside. Vulnerabilities in database management systems include these:

- Loose access permissions. Like applications and operating systems, database management systems have access control schemes that are often designed far too loosely, permitting more access to critical and sensitive information than is appropriate. Loose permissions also take the form of an excessive number of persons with privileged access, as well as failure to implement cryptography as an access control when appropriate.
- Excessive retention of sensitive data. Keeping sensitive data longer than necessary increases the impact of a security breach.
- Aggregation of personally identifiable information. Aggregating data about individuals is a potentially risky undertaking that can result in an organization possessing sensitive personal information. Sometimes, this happens when an organization deposits historic data from various sources into a data warehouse, where this disparate sensitive data is brought together for the first time. The result is a gold mine or a time bomb, depending on how you look at it.

Database security defects can be identified through manual examination or automated tools. Mitigation may be as easy as changing access permissions or as complex as redesigning the database schema and related application software programs.

Large-scale parallel data systems

Large-scale parallel data systems are systems with large numbers of processors. The processors may reside in one physical location or be geographically distributed. Vulnerabilities in these systems include:

- Loose access permissions. Management interfaces or the processing systems themselves may have default, easily guessed, or shared logon credentials that would permit an intruder to easily attack the system.
- Unprotected or weakly protected communications. Data transmitted between systems may use weak encryption or no encryption at all. This could enable an attacker to obtain sensitive data in transit or enough knowledge to compromise the system.

Security defects in parallel systems can be identified through manual examination and mitigated through configuration changes or system design changes.

Distributed systems

Distributed systems are simply systems with components scattered throughout physical and logical space. Often, these components are owned and/or managed by different groups or organizations, sometimes in different countries. Some components may be privately used while others represent services available to the public (for example, Google Maps). Vulnerabilities in distributed systems include these:

- Loose access permissions. Individual components in a distributed system may have individual, separate access control systems, or there may be one overarching access control system for all of the distributed system's components. Either way, there are many opportunities for access permissions to be too loose, enabling some subjects access to more data and functions than they need.
- Unprotected or weakly protected communications. Data transmitted between the server and other systems (including clients) may use weak encryption or no encryption at all.
- Weak security inheritance. In a distributed system, one component with weak security may compromise the security of the entire system. For example, a publicly accessible component may have direct open access to other components, bypassing local controls in those other components.
- Lack of centralized security and control. A distributed system that is controlled by more than one organization often lacks overall oversight for security management and security operations. This is especially true of peer-to-peer systems that are often run by end users on lightly managed or unmanaged endpoints.
- Critical paths. A critical path weakness is one where a system's continued operation depends on the availability of a single component.

All of these weaknesses can also be present in simpler environments. These weaknesses and other defects can be detected through the use of security scanning tools or manual techniques, and corrective actions can be taken to mitigate those defects.
High-quality standards for cloud computing — for cloud service providers as well as organizations using cloud services — can be found at the Cloud Security Alliance and the European Network and Information Security Agency.

Cryptographic systems

Cryptographic systems are especially apt to contain vulnerabilities, for the simple reason that people focus on the cryptographic algorithm but fail to implement it properly. Like any powerful tool, if the operator doesn't know how to use it, it can be useless at best and dangerous at worst. The ways in which a cryptographic system may be vulnerable include these:

- Use of an outdated algorithm. Developers and engineers must be careful to select encryption algorithms that are robust. Furthermore, algorithms in use should be reviewed at least once per year to ensure they continue to be sufficient.
- Use of an untested algorithm. Engineers sometimes make the mistake of either home-brewing their own cryptographic system or using one that is clearly insufficient. It's best to use one of the many publicly available cryptosystems that have stood the test of repeated scrutiny.
- Failure to encrypt encryption keys. A proper cryptosystem sometimes requires that encryption keys themselves be encrypted.
- Weak cryptographic keys. Choosing a great algorithm is all but undone if the initialization vector is too small, or if the keys are too short or too simple.
- Insufficient protection of cryptographic keys. A cryptographic system is only as strong as the protection of its encryption keys. If too many people have access to keys, or if the keys are not sufficiently protected, an intruder may be able to compromise the system simply by stealing and using the keys. Separate keys should be used for the data encryption key (DEK) used to encrypt/decrypt data and the key encryption key (KEK) used to encrypt/decrypt the DEK.

These and other vulnerabilities in cryptographic systems can be detected and mitigated through peer reviews of cryptosystems, assessments by qualified external parties, and the application of corrective actions to fix defects.
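The DEK/KEK separation in the last bullet can be illustrated in a few lines. This is a minimal sketch using the Fernet recipe from the third-party Python cryptography package; in practice the KEK would live in an HSM or key management service, not in the same process as the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Key encryption key (KEK): in a real system this is kept in an HSM or
# a key management service, never stored alongside the data it protects.
kek = Fernet.generate_key()

# Data encryption key (DEK): encrypts the actual data...
dek = Fernet.generate_key()
ciphertext = Fernet(dek).encrypt(b"sensitive record")

# ...and is itself stored only in encrypted ("wrapped") form under the KEK.
wrapped_dek = Fernet(kek).encrypt(dek)

# To decrypt: first unwrap the DEK with the KEK, then decrypt the data.
recovered_dek = Fernet(kek).decrypt(wrapped_dek)
plaintext = Fernet(recovered_dek).decrypt(ciphertext)
assert plaintext == b"sensitive record"
```

The benefit of this design is that rotating the KEK only requires re-wrapping the (small) DEKs, not re-encrypting all of the underlying data.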
Industrial control systems

Industrial control systems (ICS) represent a wide variety of means for monitoring and controlling machinery of various kinds, including power generation, distribution, and consumption; natural gas and petroleum pipelines; municipal water, irrigation, and waste systems; traffic signals; manufacturing; and package distribution. Weaknesses in industrial control systems include the following:

- Loose access permissions. Access to the monitoring or control functions of an ICS is often set too loosely, enabling some users or systems access to more data and control than they need.
- Failure to change default access credentials. All too often, organizations implement ICS components and fail to change the default administrative credentials on those components. This makes it far too easy for intruders to take over the ICS.
- Access from personally owned devices. In the name of convenience, some organizations permit personnel to control machinery from personally owned smartphones and tablets. This vastly increases the ICS's attack surface and provides opportunities for intruders to access and control critical machinery.
- Lack of malware control. Many ICSs lack security components that detect and block malware and other malicious activity, giving intruders far too easy a time getting into the ICS.
- Failure to air gap the ICS. Many organizations fail to air gap (isolate) the ICS from the rest of the corporate network, thereby creating excessive opportunities for malware and intruders to reach the ICS via a corporate network where users invite malware through phishing and other means.
- Failure to update ICS components. While the manufacturers of ICS components are notorious for failing to issue security patches, organizations are equally culpable in their failure to install these patches when they do arrive.

These vulnerabilities can be mitigated through a systematic process of establishing good controls, testing control effectiveness, and applying corrective action when controls are found to be ineffective.

Cloud-based systems

The U.S. National Institute of Standards and Technology (NIST) defines three cloud computing service models as follows:

- Software as a Service (SaaS): Customers are provided access to an application running on a cloud infrastructure. The application is accessible from various client devices and interfaces, but the customer has no knowledge of, and does not manage or control, the underlying cloud infrastructure. The customer may have access to limited user-specific application settings.
- Platform as a Service (PaaS): Customers can deploy supported applications onto the provider's cloud infrastructure, but the customer has no knowledge of, and does not manage or control, the underlying cloud infrastructure. The customer has control over the deployed applications and limited configuration settings for the application-hosting environment.
- Infrastructure as a Service (IaaS): Customers can provision processing, storage, networks, and other computing resources and deploy and run operating systems and applications, but the customer has no knowledge of, and does not manage or control, the underlying cloud infrastructure. The customer has control over operating systems, storage, and deployed applications, as well as some networking components (for example, host firewalls).

NIST further defines four cloud computing deployment models as follows:

- Public: A cloud infrastructure that is open to use by the general public. It's owned, managed, and operated by a third party (or parties) and exists on the cloud provider's premises.
- Community: A cloud infrastructure that is used exclusively by a specific group of organizations.
- Private: A cloud infrastructure that is used exclusively by a single organization. It may be owned, managed, and operated by the organization or a third party (or a combination of both), and may exist on or off premises.
- Hybrid: A cloud infrastructure that is composed of two or more of the aforementioned deployment models, bound together by standardized or proprietary technology that enables data and application portability (for example, failover to a secondary data center for disaster recovery or content delivery networks across multiple clouds).

Major public cloud service providers such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Oracle Cloud Platform provide customers not only with virtually unlimited compute and storage at scale, but also a depth and breadth of security capabilities that often exceeds the capabilities of the customers themselves. However, this does not mean that cloud-based systems are inherently secure. The shared responsibility model is used by public cloud service providers to clearly define which aspects of security the provider is responsible for, and which aspects the customer is responsible for.
SaaS models place the most responsibility on the cloud service provider, typically including securing the following:

- Applications and data
- Runtime and middleware
- Servers, virtualization, and operating systems
- Storage and networking
- Physical data center

However, the customer is always ultimately responsible for the security and privacy of its data. Additionally, identity and access management (IAM) is typically the customer's responsibility.

In a PaaS model, the customer is typically responsible for the security of its applications and data, as well as IAM, among other things. In an IaaS model, the customer is typically responsible for the security of its applications and data, runtime and middleware, and operating systems. The cloud service provider is typically responsible for the security of networking and the data center (although cloud service providers generally do not provide firewalls). Virtualization, server, and storage security may be managed by either the cloud service provider or the customer.

The Cloud Security Alliance (CSA) publishes the Cloud Controls Matrix, which provides a framework for information security that is specifically designed for the cloud industry.

Internet of Things

The security of Internet of Things (IoT) devices and systems is a rapidly evolving area of information security. IoT sensors and devices collect large amounts of both potentially sensitive data and seemingly innocuous data. However, because under certain circumstances practically any data that is collected can be used for nefarious purposes, security must be a critical design consideration for IoT devices and systems. This includes not only securing the data stored on the systems, but also how the data is collected, transmitted, processed, and used. Many networking and communications protocols are commonly used in IoT devices, including the following:

- IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN)
- 5G
- Wi-Fi
- Bluetooth Mesh and Bluetooth Low Energy (BLE)
- Thread
- Zigbee, and many others

The security of these various protocols and their implementations must also be carefully considered in the design of secure IoT devices and systems.
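Weak or missing transport encryption shows up in nearly every category above, from client/server systems to IoT. As a first check, you can ask a server what TLS version and cipher it actually negotiates. This minimal sketch uses only the Python standard library; example.com is a placeholder host.

```python
import socket
import ssl

def tls_summary(host, port=443):
    """Report the TLS version and cipher suite a server actually negotiates."""
    context = ssl.create_default_context()  # also validates the certificate
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

version, cipher = tls_summary("example.com")
print(version, cipher)  # e.g. TLSv1.3 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
```

A result of TLS 1.0/1.1, or a handshake failure against a modern default context, is a signal that the endpoint's transport protection needs attention.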
Article / Updated 09-27-2022
The International Information System Security Certification Consortium (ISC)2 has several other certifications, including some that you may aspire to earn after (or instead of) receiving your Certified Information Systems Security Professional (CISSP) credential. These certifications are:

- CCFP® (Certified Cyber Forensics Professional): This is a certification for forensics and security incident responders.
- CCSP℠ (Certified Cloud Security Professional): This certification on cloud controls and security practices was co-developed by (ISC)2 and the Cloud Security Alliance.
- CSSLP® (Certified Secure Software Lifecycle Professional): Designed for software development professionals, the CSSLP recognizes software development in which security is a part of the software requirements, design, and testing — so that the finished product has security designed in and built in, rather than added on afterward.
- HCISPP® (HealthCare Information Security and Privacy Practitioner): Designed for information security in the healthcare industry, the HCISPP recognizes knowledge and experience related to healthcare data protection regulations and the protection of patient data.
- JGISP (Japanese Government Information Security Professional): A country-specific certification that validates a professional's knowledge, skills, and experience related to Japanese government regulations and standards.
- CAP® (Certification and Accreditation Professional): Jointly developed by the U.S. Department of State's Office of Information Assurance and (ISC)2, the CAP credential reflects the skills required to assess risk and establish security requirements for complex systems and environments.
Article / Updated 09-19-2022
Web-based systems contain many components, including application code, database management systems, operating systems, middleware, and the web server software itself. These components may, individually and collectively, have security design or implementation defects. Some of the defects present include these:

- Failure to block injection attacks. Attacks such as JavaScript injection and SQL injection can permit an attacker to cause a web application to malfunction and expose sensitive internally stored data.
- Defective authentication. There are many ways in which a web site can implement authentication — too numerous to list here. Authentication is essential to get right; many sites fail to do so.
- Defective session management. Web servers create logical "sessions" to keep track of individual users. Many web sites' session management mechanisms are vulnerable to abuse, most notably flaws that permit an attacker to take over another user's session.
- Failure to block cross-site scripting attacks. Web sites may fail to examine and sanitize input data. As a result, attackers can sometimes create attacks that send malicious content to the user.
- Failure to block cross-site request forgery attacks. Web sites that fail to employ proper session and session context management can be vulnerable to attacks in which users are tricked into sending commands to web sites that may cause them harm.
- Failure to protect direct object references. Web sites can sometimes be tricked into accessing and sending data to a user who is not authorized to view or modify it.

These vulnerabilities can be mitigated in three main ways:

- Developer training on the techniques of safer software development
- Including security in the development lifecycle
- Use of dynamic and static application scanning tools

For a more in-depth review of vulnerabilities in web-based systems, read the "Top 10" list at OWASP.
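To make the first bullet concrete, here is a minimal, self-contained sketch of SQL injection and its standard fix, parameterized queries. It uses Python's built-in sqlite3 module; the table and payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(find_user_unsafe(payload))  # [('admin',)] -- the WHERE clause is bypassed
print(find_user_safe(payload))    # [] -- the payload matches no user
```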
Article / Updated 08-06-2020
Email has emerged as one of the most important communication mediums in our global economy, with over 50 billion email messages sent worldwide every day. Unfortunately, spam accounts for as much as 85 percent of that email volume. Spam is more than a minor nuisance — it's a serious security threat to all organizations worldwide.

The Simple Mail Transfer Protocol (SMTP) is used to send and receive email across the Internet. It operates on TCP port 25 and contains many well-known vulnerabilities. Most SMTP mail servers are configured by default to forward (or relay) all mail, regardless of whether the sender's or recipient's address is valid. Failing to secure your organization's mail servers may allow spammers to misuse your servers and bandwidth as an open relay to propagate their spam. The bad news is that you'll eventually (it usually doesn't take more than a few days) get blocklisted by a large number of organizations that maintain real-time blackhole lists (RBLs) against open relays, effectively preventing most (if not all) email communications from your organization from reaching their intended recipients. It usually takes several months to get removed from those RBLs after you've been blocklisted, and the blocklisting does significant damage to your organization's communications infrastructure and credibility.

Using RBLs is only one method to combat spam, and it's generally not even the most effective or reliable method, at that. The organizations that maintain these massive lists aren't perfect and do make mistakes. If a mistake is made with your domain or IP addresses, you'll curse their existence — it's a case in which the cure is sometimes worse than the disease.

Failure to make a reasonable effort towards spam prevention in your organization is a failure of due diligence. An organization that fails to implement appropriate countermeasures may find itself a defendant in a sexual harassment lawsuit from an employee inundated with pornographic emails sent by a spammer to his or her corporate email address. Other risks associated with spam email include:

- Missing or deleting important emails: Your boss might inadvertently delete that email authorizing your promotion and pay raise because her inbox is flooded with spam and she gets trigger-happy with the Delete button — at least it's a convenient excuse!
- Viruses and other mail-icious code: Although you seem to hear less about viruses in recent years, they're still prevalent, and email remains the favored medium for propagating them.
- Phishing and pharming scams: Phishing and pharming attacks, in which victims are lured to an apparently legitimate website (typically online banking or auctions) ostensibly to validate their personal account information, are usually perpetrated through mass mailings. It's a complex scam increasingly perpetrated by organized criminals. Ultimately, phishing and pharming scams cost the victim his or her moolah — and possibly his or her identity.

Countering these threats requires an arsenal of technical solutions and user-awareness efforts and is — at least, for now — a never-ending battle. Begin by securing your servers and client PCs. Mail servers should always be placed in a DMZ, and unnecessary or unused services should be disabled — and change that default relay setting! Most other servers, and almost all client PCs, should have port 25 disabled. Implement a spam filter or other secure mail gateway. Also, consider the following user-awareness tips:

- Never unsubscribe or reply to spam email.
Unsubscribe links in spam emails are often used to confirm the legitimacy of your email address, which can then be added to mass-mailing lists that are sold to other spammers. And, as tempting as it is to tell a spammer what you really think of his or her irresistible offer to enhance your social life or improve your financial portfolio, most spammers don't actually read your replies and (unfortunately) aren't likely to follow your suggestion that they jump off a cliff. Although legitimate offers from well-known retailers or newsletters from professional organizations may be thought of as spam by many people, it's likely that, at some point, a recipient of such a mass mailing actually signed up for that stuff — so it's technically not spam. Everyone seems to want your email address whenever you fill out an application for something, and providing your email address often translates to an open invitation for them to tell you about every sale from here to eternity. In such cases, senders are required by U.S. law to provide an Unsubscribe hyperlink in their mass mailings, and clicking it does remove the recipient from future mailings.

- Don't send auto-reply messages to Internet email addresses (if possible). Mail servers can be configured not to send auto-reply messages (such as out-of-office messages) to Internet email addresses. However, this setting may not be (and probably isn't) practical in your organization. Be aware of the implications — auto-reply rules don't discriminate against spammers, so the spammers know when you're on vacation, too!
- Get a firewall for your home computer before you connect it to the Internet. This admonishment is particularly true if you're using a high-speed cable or DSL modem. Typically, a home computer that has high-speed access will be scanned within minutes of being connected to the Internet. And if it isn't protected by a firewall, this computer will almost certainly be compromised and become an unsuspecting zombie in some spammer's bot-net army (over 250,000 new zombies are added to the Internet every day!). Then, you'll become part of the problem because your home computer and Internet bandwidth are used to send spam and phishing emails to thousands of other victims around the world, and you'll be left wondering why your brand-new state-of-the-art home computer is suddenly so slow and your blazing new high-speed Internet connection isn't so high-speed just two weeks after you got it.

Your end users don't have to be CISSP-certified to secure their home computers. A simple firewall software package that has a basic configuration is usually enough to deter the majority of today's hackers — most are using automated tools to scan the Internet and don't bother to slow down for a computer that presents even the slightest challenge. Size matters in these bot-net armies, and far too many unprotected computers are out there to waste time (even a few minutes) defeating your firewall.

Spam is only the tip of the iceberg. Get ready for emerging threats such as SPIM (spam over instant messaging) and SPIT (spam over Internet telephony) that will up the ante in the battle for messaging security.

Other email security considerations include malicious code contained in attachments, lack of privacy, and lack of authentication. These considerations can be countered by implementing antivirus scanning software, encryption, and digital signatures, respectively.
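Returning to RBLs for a moment: an RBL lookup is just a DNS query. The sketch below, using only the Python standard library, checks one IPv4 address against a single list. The zen.spamhaus.org zone is a real example, but production use is subject to the list operator's usage policies and normally happens inside a proper mail gateway.

```python
import socket

def is_listed(ip, zone="zen.spamhaus.org"):
    """Check an IPv4 address against a DNS-based blackhole list."""
    # RBLs are queried by reversing the octets:
    # 192.0.2.1 -> 1.2.0.192.zen.spamhaus.org
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True   # any A record in the zone means "listed"
    except socket.gaierror:
        return False  # NXDOMAIN: the address is not on the list

print(is_listed("127.0.0.2"))  # the documented always-listed test address
```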
Several applications employing various cryptographic techniques have been developed to provide confidentiality, integrity, authentication, non-repudiation, and access control for email communications.

- Secure Multipurpose Internet Mail Extensions (S/MIME): S/MIME is a secure method of sending email incorporated into several popular browsers and email applications. S/MIME provides confidentiality and authentication by using the RSA asymmetric key system, digital signatures, and X.509 digital certificates. S/MIME complies with the Public Key Cryptography Standard (PKCS) #7 format and is an Internet Engineering Task Force (IETF) specification.
- MIME Object Security Services (MOSS): MOSS provides confidentiality, integrity, identification and authentication, and non-repudiation by using MD2 or MD5, RSA asymmetric keys, and DES. MOSS has never been widely implemented or used, primarily because of the popularity of PGP.
- Privacy Enhanced Mail (PEM): PEM was proposed as a PKCS-compliant standard by the IETF, but it has never been widely implemented or used. It provides confidentiality and authentication by using 3DES for encryption, MD2 or MD5 message digests, X.509 digital certificates, and the RSA asymmetric system for digital signatures and secure key distribution.
- Pretty Good Privacy (PGP): PGP is a popular email encryption application. It provides confidentiality and authentication by using the IDEA Cipher for encryption and the RSA asymmetric system for digital signatures and secure key distribution. Instead of a central Certificate Authority (CA), PGP uses a decentralized trust model (in which the communicating parties implicitly trust each other), which is ideally suited for smaller groups to validate user identity (instead of using a PKI, which can be costly and difficult to maintain). Today, two basic versions of PGP software are available: a commercial version from Symantec Corporation, and an open-source version, GPG.
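All of these standards rest on the same primitive: a digital signature made with the sender's private key and verified with the corresponding public key. The sketch below shows that primitive (not the S/MIME or PGP wire formats) using the third-party Python cryptography package; the message text is invented.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The sender signs the message with a private key...
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Please wire payment to the usual account."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# ...and any recipient holding the public key can verify that the message
# is authentic and unaltered. verify() raises an exception on failure.
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
print("signature verified")
```

This is what gives email signing its authentication and non-repudiation properties: only the private-key holder could have produced the signature, and changing even one byte of the message invalidates it.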
Article / Updated 08-02-2018
Embedded devices encompass the wide variety of systems and devices that are Internet connected. Mainly, we're talking about devices that are not human connected in the computing sense. Examples of such devices include:

- Automobiles and other vehicles
- Home appliances, such as clothes washers and dryers, ranges and ovens, refrigerators, thermostats, televisions, video games, video surveillance systems, and home automation systems
- Medical care devices, such as IV infusion pumps and patient monitoring
- Heating, ventilation, and air conditioning (HVAC) systems
- Commercial video surveillance and key card systems
- Automated payment kiosks, fuel pumps, and automated teller machines (ATMs)
- Network devices such as routers, switches, modems, and firewalls

These devices often run embedded systems, which are specialized operating systems designed to run on devices lacking computer-like human interaction through a keyboard or display. They still have an operating system that is very similar to that found on endpoints like laptops and mobile devices. Some of the design defects in this class of device include:

- Lack of a security patching mechanism. Most of these devices utterly lack any means for remediating security defects that are found after manufacture.
- Lack of anti-malware mechanisms. Most of these devices have no built-in defenses at all. They're completely defenseless against attack by an intruder.
- Lack of robust authentication. Many of these devices have simple, easily guessed default login credentials that cannot be changed (or, at best, are rarely changed by their owners).
- Lack of monitoring capabilities. Many of these devices lack any means for sending security and event alerts.

Because the majority of these devices cannot be altered, mitigation of these defects typically involves isolating them on separate, heavily guarded networks that have tools in place to detect and block attacks. Indeed, many manufacturers of embedded, network-enabled devices do not permit customers to alter their configuration or apply security settings, which compels organizations to place these devices on separate, guarded networks.
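One quick way to spot the weak-authentication and cleartext-management problems above is to see which legacy services a device exposes. This is a minimal sketch using Python's standard library; the port list and address are illustrative, and you should only scan devices you own or are explicitly authorized to test.

```python
import socket

# Legacy services that transmit credentials in cleartext or commonly
# ship with default passwords on embedded devices.
RISKY_PORTS = {23: "telnet", 21: "ftp", 80: "http (cleartext management)"}

def risky_open_ports(host, timeout=0.5):
    """Return the risky services a device is exposing on the network."""
    findings = []
    for port, service in RISKY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted
                findings.append((port, service))
    return findings

print(risky_open_ports("192.168.1.50"))  # a hypothetical camera or DVR
```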
Article / Updated 08-02-2018
Mobile systems include the operating systems and applications on smartphones, tablets, phablets, smart watches, and wearables. The most popular operating system platforms for mobile systems are Apple iOS, Android, and Windows 10. The vulnerabilities found on mobile systems include:

- Lack of robust resource access controls. History has shown us that some mobile OSs lack robust controls that govern which apps are permitted to access resources on the mobile device, including locally stored data, the contact list, the camera roll, email messages, location services, the camera, and the microphone.
- Insufficient security screening of applications. Some mobile platform environments are quite good at screening out applications that contain security flaws or outright break the rules, but other platforms have more of an "anything goes" policy, apparently. The result is buyer beware: Your mobile app may be doing more than advertised.
- Security settings defaults too lax. Many mobile platforms lack enforcement of basic security and, for example, don't require devices to automatically lock or have lock codes.

In a managed corporate environment, the use of a mobile device management (MDM) system can mitigate many or all of these risks. For individual users, mitigation comes down to each user doing the right thing and applying strong security settings.
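An MDM system mitigates the lax-defaults problem by comparing each enrolled device's reported posture against a required policy. The toy Python sketch below shows the idea only; the attribute names and thresholds are invented, and a real MDM exposes far richer posture data through its own APIs.

```python
# A hypothetical posture record, as an MDM agent might report it.
device = {
    "screen_lock": True,
    "os_version": (12, 4),
    "storage_encrypted": True,
    "jailbroken": False,
}

# The organization's required policy (all values are illustrative).
POLICY = {
    "screen_lock": True,        # require an automatic lock with a passcode
    "min_os_version": (13, 0),  # block outdated, unpatched builds
    "storage_encrypted": True,
    "jailbroken": False,
}

def violations(device):
    problems = []
    if device["screen_lock"] != POLICY["screen_lock"]:
        problems.append("screen lock disabled")
    if device["os_version"] < POLICY["min_os_version"]:
        problems.append("OS version below minimum")
    if device["storage_encrypted"] != POLICY["storage_encrypted"]:
        problems.append("storage not encrypted")
    if device["jailbroken"] != POLICY["jailbroken"]:
        problems.append("device is jailbroken")
    return problems

print(violations(device))  # ['OS version below minimum']
```

A noncompliant device would typically be quarantined from corporate email and data until the violations are resolved.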
Article / Updated 08-02-2018
Basic computer (system) architecture refers to the structure of a computer system and comprises its hardware, firmware, and software. The CompTIA A+ certification exam covers computer architecture in depth and is an excellent way to prepare for this portion of the CISSP examination.

Hardware

Hardware consists of the physical components in computer architecture. The main components of the computer architecture include the CPU, memory, and bus.

CPU

The CPU (Central Processing Unit), or microprocessor, is the electronic circuitry that performs a computer's arithmetic, logic, and computing functions. The main components of a CPU include:

- Arithmetic Logic Unit (ALU): Performs numerical calculations and comparative logic functions, such as ADD, SUBTRACT, DIVIDE, and MULTIPLY.
- Bus Interface Unit (BIU): Supervises data transfers over the bus system between the CPU and I/O devices.
- Control Unit: Coordinates activities of the other CPU components during program execution.
- Decode Unit: Converts incoming instructions into individual CPU commands.
- Floating-Point Unit (FPU): Handles higher math operations for the ALU and control unit.
- Memory Management Unit (MMU): Handles addressing and cataloging of data stored in memory and translates logical addressing into physical addressing.
- Pre-Fetch Unit: Preloads instructions into CPU registers.
- Protection Test Unit (PTU): Monitors all CPU functions to ensure that they're properly executed.
- Registers: Hold CPU data, addresses, and instructions temporarily, in special buffers.

The basic operation of a microprocessor consists of two distinct phases: fetch and execute. (It's not too different from what your dog does: You throw the stick, and he fetches the stick.) During the fetch phase, the CPU locates and retrieves a required instruction from memory. During the execute phase, the CPU decodes and executes the instruction. These two phases make up a basic machine cycle that's controlled by the CPU clock signals. Many complex instructions require more than a single machine cycle to execute.

The four operating states for a computer (CPU) are:

- Operating (or run) state: The CPU executes an instruction or instructions.
- Problem (or application) state: The CPU calculates a solution to an application-based problem. During this state, only a limited subset of instructions (non-privileged instructions) is available.
- Supervisory state: The CPU executes a privileged instruction, meaning that instruction is available only to a system administrator or other authorized user/process.
- Wait state: The CPU hasn't yet completed execution of an instruction and must extend the cycle.

The two basic types of CPU designs used in modern computer systems are:

- Complex-Instruction-Set Computing (CISC): Can perform multiple operations per single instruction. Optimized for systems in which the fetch phase is the longest part of the instruction execution cycle. CPUs that use CISC include Intel x86, PDP-11, and Motorola 68000.
- Reduced-Instruction-Set Computing (RISC): Uses fewer, simpler instructions than CISC architecture, requiring fewer clock cycles to execute. Optimized for systems in which the fetch and execute phases are approximately equal. CPUs that have RISC architecture include Alpha, PowerPC, and SPARC.

Microprocessors are also often described as scalar or superscalar. A scalar processor executes a single instruction at a time. A superscalar processor can execute multiple instructions concurrently.
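The fetch/execute cycle is easy to see in miniature. Below is a toy accumulator machine in Python, an invented four-instruction architecture for illustration only: each loop iteration fetches one instruction, then decodes and executes it.

```python
# A toy accumulator machine: each machine cycle fetches one instruction,
# decodes its opcode, and executes it -- the fetch/execute cycle in
# miniature, minus pipelining and real registers.
PROGRAM = [("LOAD", 7), ("ADD", 3), ("STORE", 0), ("HALT", None)]
memory = [0] * 8
acc = 0   # accumulator register
pc = 0    # program counter

while True:
    opcode, operand = PROGRAM[pc]   # fetch phase
    pc += 1
    if opcode == "LOAD":            # execute phase (decode + execute)
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break

print(memory[0])  # 10
```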
Finally, many systems (microprocessors) are classified according to additional functionality (which must be supported by the installed operating system):

- Multitasking: Alternates the execution of multiple subprograms or tasks on a single processor.
- Multiprogramming: Alternates the execution of multiple programs on a single processor.
- Multiprocessing: Executes multiple programs on multiple processors simultaneously.

Two related concepts are multistate and multiuser systems that, more correctly, refer to operating system capabilities:

- Multistate: The operating system supports multiple operating states, such as single-user and multiuser modes in the UNIX/Linux world and Normal and Safe modes in the Windows world.
- Multiuser: The operating system can differentiate between users. For example, it provides different shell environments, profiles, or privilege levels for each user, as well as process isolation between users.

An important security issue in multiuser systems involves privileged accounts, and programs or processes that run in a privileged state. Programs such as su (UNIX/Linux) and RunAs (Windows) allow a user to switch to a different account, such as root or administrator, and execute privileged commands in this context. Many programs rely on privileged service accounts to function properly. Utilities such as IBM's Superzap, for example, are used to install fixes to the operating system or other applications.

Bus

The bus is a group of electronic conductors that interconnect the various components of the computer, transmitting signals, addresses, and data between these components. Bus structures are organized as follows:

- Data bus: Transmits data between the CPU, memory, and peripheral devices.
- Address bus: Transmits addresses of data and instructions between the CPU and memory.
- Control bus: Transmits control information (device status) between the CPU and other devices.

Main memory

Main memory (also known as main storage) is the part of the computer that stores programs, instructions, and data. The two basic types of physical (or real, as opposed to virtual; more on that later) memory are:

- Random Access Memory (RAM): Volatile memory (data is lost if power is removed) that can be directly addressed and whose stored data can be altered. RAM is typically implemented in a computer's architecture as cache memory and primary memory. The two main types of RAM are:
  - Dynamic RAM (DRAM): Must be refreshed (the contents rewritten) every two milliseconds because of capacitance decay. Refreshing is accomplished by using multiple clock signals known as multiphase clock signals.
  - Static RAM (SRAM): Faster than DRAM and uses circuit latches to represent data, so it doesn't need to be refreshed. Because SRAM doesn't need to be refreshed, a single-phase clock signal is used.
- Read-Only Memory (ROM): Nonvolatile memory (data is retained even if power is removed) that can be directly addressed but whose stored data can't be easily altered. ROM is typically implemented in a computer's architecture as firmware (which we discuss in the following section). Variations of ROM include:
  - Programmable Read-Only Memory (PROM): This type of ROM can't be rewritten.
  - Erasable Programmable Read-Only Memory (EPROM): This type of ROM is erased by shining ultraviolet light into the small window on the top of the chip. (No, we aren't kidding.)
  - Electrically Erasable Programmable Read-Only Memory (EEPROM): This type of ROM was one of the first that could be changed without UV light.
    It is also known as Electrically Alterable Read-Only Memory (EAROM).
  - Flash Memory: This type of memory is used in USB thumb drives.

Be sure you don't confuse the term "main storage" with the storage provided by hard drives.

Secondary memory

Secondary memory (also known as secondary storage) is a variation of these two basic types of physical memory. It provides dynamic storage on nonvolatile magnetic media such as hard drives, solid-state drives, or tape drives (which are considered sequential memory because data can't be directly accessed; instead, you must search from the beginning of the tape). Virtual memory (such as a paging file, swap space, or swap partition) is a type of secondary memory that uses both installed physical memory and available hard-drive space to present a larger apparent memory space to the CPU than actually exists in main storage.

Two important security concepts associated with memory are the protection domain (also called protected memory) and memory addressing.

A protection domain prevents other programs or processes from accessing and modifying the contents of an address space that's already been assigned to another active program or process. This protection can be performed by the operating system or implemented in hardware. The purpose of a protection domain is to protect the memory space assigned to a process so that no other process can read from or alter it. The memory space occupied by each process can be considered private.

Memory space describes the amount of physical memory available in a computer system (for example, 2 GB), whereas address space specifies where memory is located in a computer system (a memory address). Memory addressing describes the method used by the CPU to access the contents of memory. A physical memory address is a hard-coded address assigned to physically installed memory. It can only be accessed by the operating system, which maps physical addresses to virtual addresses. A virtual (or symbolic) memory address is the address used by applications (and programmers) to specify a desired location in memory. Common virtual memory addressing modes include:

- Base addressing: An address used as the origin for calculating other addresses.
- Absolute addressing: An address that identifies a location without reference to a base address — or it may be a base address itself.
- Indexed addressing: Specifies an address relative to an index register. (If the index register changes, the resulting memory location changes.)
- Indirect addressing: The specified address contains the address of the final desired location in memory.
- Direct addressing: Specifies the address of the final desired memory location.

Don't confuse the concepts of virtual memory and virtual addressing. Virtual memory combines physical memory and hard drive space to create more apparent memory (or memory space). Virtual addressing is the method used by applications and programmers to specify a desired location in physical memory.
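The addressing modes are easier to keep straight with a toy model. In the Python sketch below, "memory" is a flat array and each line shows how one mode locates its operand; the values are arbitrary.

```python
# A toy model: "memory" is a flat array, and each line below shows how
# one addressing mode locates its operand.
memory = [10, 42, 7, 1, 99, 0, 0, 0]

base = 1    # base address (as if held in a base register)
index = 2   # offset (as if held in an index register)

direct = memory[4]              # direct: the instruction names address 4 -> 99
indexed = memory[base + index]  # indexed: base + index = address 3 -> 1
pointer = memory[3]             # indirect, step 1: address 3 holds another address (1)...
indirect = memory[pointer]      # ...step 2: fetch from address 1 -> 42

print(direct, indexed, indirect)  # 99 1 42
```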
Firmware

Firmware is a program or set of computer instructions stored in the physical circuitry of ROM memory. These types of programs are typically changed infrequently or not at all. In servers and user workstations, firmware usually stores the initial computer instructions that are executed when the server or workstation is powered on; the firmware starts the CPU and other onboard chips and establishes communications by using the keyboard, monitor, network adaptor, and hard drive. The firmware retrieves blocks of data from the hard drive that are then used to load and start the operating system.

A computer's BIOS is a common example of firmware. BIOS, or Basic Input-Output System, contains instructions needed to start a computer when it's first powered on, initialize devices, and load the operating system from secondary storage (such as a hard drive). Firmware is also found in devices such as smartphones, tablets, DSL/cable modems, and practically every other type of Internet-connected device, such as automobiles, thermostats, and even your refrigerator. Firmware is typically stored on one or more ROM chips on a computer's motherboard (the main circuit board containing the CPU(s), memory, and other circuitry).

Software

Software includes the operating system and programs or applications that are installed on a computer system.

Operating systems

A computer operating system (OS) is the software that controls the workings of a computer, enabling the computer to be used. The operating system can be thought of as a logical platform through which other programs can be run to perform work. The main components of an operating system are:

- Kernel: The core component of the operating system; it manages processes, controls hardware devices, and handles communications to external devices and systems.
- Device drivers: Software modules used by the kernel to communicate with internal and external devices that may be connected to the computer.
- Tools: Independent programs that perform specific maintenance functions, such as filesystem repair or network testing. Tools can be run automatically or manually.

The operating system controls a computer's resources. The main functions of the operating system are:

- Process management: Sets up an environment in which multiple independent processes (programs) can run.
- Resource management: Controls access to all available resources, using schemes that may be based on priority or efficiency.
- I/O device management: Controls communication to all devices that are connected to the computer, including hard drives, printers, monitors, keyboard, mouse, and so on.
- Memory management: Controls the allocation of and access to main memory (RAM), allocating it to processes as well as to general uses such as disk caching.
- File management: Controls the file systems that are present on hard drives and other types of devices, and performs all file operations on behalf of individual processes.
- Communications management: Controls communications on all available communications media on behalf of processes.

Virtualization

A virtual machine is a software implementation of a computer, enabling many running copies of an operating system to execute on a single running computer without interfering with each other. Virtual machines are typically controlled by a hypervisor, a software program that allocates resources for each resident operating system (called a guest). A hypervisor serves as an operating system for multiple operating systems. One of the strengths of virtualization is that the resident operating system has little or no awareness of the fact that it's running as a guest — instead, it may believe that it has direct control of the computer's hardware. Only your system administrator knows for sure.

Containerization

A container is a lightweight, standalone executable package of a piece of software that includes everything it needs to run.
A container is essentially a bare-bones virtual machine that has only the minimum software installed necessary to deploy a given application. Docker is a popular container platform, and Kubernetes is a widely used platform for orchestrating containers at scale.
Article / Updated 08-01-2018
Various security controls and countermeasures that should be applied to security architecture, as appropriate, include defense in depth, system hardening, implementation of heterogeneous environments, and designing system resilience.

Defense in depth

Defense in depth is a strategy for resisting attacks. A system that employs defense in depth has two or more layers of protective controls designed to protect the system or the data stored there. An example defense-in-depth architecture would consist of a database protected by several components, such as:

- Screening router
- Firewall
- Intrusion prevention system
- Hardened operating system
- OS-based network access filtering

All the layers listed here help to protect the database. In fact, each one of them by itself offers nearly complete protection. But when considered together, all these controls offer a varied (in effect, deeper) defense, hence the term defense in depth.

Defense in depth refers to the use of multiple layers of protection.

System hardening

Most types of information systems, including computer operating systems, have several general-purpose features that make it easy to set up the systems. But systems that are exposed to the Internet should be "hardened," or configured according to the following concepts:

- Remove all unnecessary components.
- Remove all unnecessary accounts.
- Close all unnecessary network listening ports.
- Change all default passwords to complex, difficult-to-guess passwords.
- Run all necessary programs at the lowest possible privilege.
- Install security patches as soon as they are available.

System hardening guides can be obtained from a number of sources, such as:

- The Center for Internet Security
- The Information Assurance Support Environment, from the U.S. Defense Information Systems Agency

Software and operating system vendors often provide their own hardening guides, which may also be useful.
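As a small taste of what a hardening check looks like in practice, here is a minimal Python sketch that audits a few OpenSSH settings on a Linux host. The file path and the expected values are assumptions to be adapted to local policy, and the sketch ignores Include directives and compiled-in defaults for unset options.

```python
# Expected values for a few high-impact sshd settings (adjust to policy).
EXPECTED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "x11forwarding": "no",
}

def audit_sshd(path="/etc/ssh/sshd_config"):
    """Compare sshd_config directives against expected hardened values."""
    findings = []
    with open(path) as config:
        for line in config:
            parts = line.split()
            if len(parts) >= 2 and parts[0].lower() in EXPECTED:
                key, value = parts[0].lower(), parts[1].lower()
                if value != EXPECTED[key]:
                    findings.append(
                        f"{parts[0]} is {parts[1]}, expected {EXPECTED[key]}")
    return findings

for finding in audit_sshd():
    print(finding)
```

Real hardening benchmarks, such as those from the Center for Internet Security, check hundreds of settings like these across the operating system and its services.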
Heterogeneous environment

Rather than containing systems or components of a single type, a heterogeneous environment contains a variety of different types of systems. Contrast an environment that consists only of Windows 2016 servers and the latest SQL Server and IIS with a more complex environment that contains Windows, Linux, and Solaris servers with Microsoft SQL Server, MySQL, and Oracle databases. The advantage of a heterogeneous environment is its variety of systems; for one thing, the various types of systems probably won't possess common vulnerabilities, which makes them harder to attack. However, the complexity of a heterogeneous environment also negatively impacts security, as there are more components that can potentially fail or be compromised.

The weakness of a homogeneous environment (one where all of the systems are the same) is its uniformity. If a weakness in one of the systems is discovered, all systems may have the weakness. If one of the systems is attacked and compromised, all may be attacked and compromised. You can liken homogeneity to a herd of animals: if they are genetically identical, they may all be susceptible to a disease that could wipe out the entire herd; if they are genetically diverse, perhaps some will be able to survive the disease.

System resilience

The resilience of a system is a measure of its ability to keep running, even under less-than-ideal conditions. Resilience is important at all levels, including network, operating system, subsystem (such as database management system or web server), and application. Resilience can mean a lot of different things. Here are some examples:

- Filter malicious input: The system can recognize and reject input that may be an attack. Examples of suspicious input include what you typically get in an injection attack, buffer-overflow attack, or denial-of-service attack (a crude detection sketch follows this list).
- Data replication: The system copies critical data to a separate storage system in the event of component failure.
- Redundant components: The system contains redundant components that permit it to continue running even when hardware failures or malfunctions occur. Examples of redundant components include multiple power supplies, multiple network interfaces, redundant storage techniques such as RAID, and redundant server architecture techniques such as clustering.
- Maintenance hooks: Hidden, undocumented features in software programs that are intended to inappropriately expose data or functions for illicit use.
- Security countermeasures: Knowing that systems are subject to frequent or constant attack, systems architects need to include several security countermeasures in order to minimize system vulnerability, such as:
  - Revealing as little information about the system as possible. For example, don't permit the system to ever display the version of operating system, database, or application software that's running.
  - Limiting access to only those persons who must use the system in order to fulfill needed organizational functions.
  - Disabling unnecessary services in order to reduce the number of attack targets.
  - Using strong authentication in order to make it as difficult as possible for outsiders to access the system.
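Here is the crude input-detection sketch promised above, in Python. The patterns are illustrative only; a blocklist like this is a backstop, while parameterized queries and output encoding are the real fixes for injection and cross-site scripting.

```python
import re

# Crude signatures for two common attack shapes. Attackers can often
# evade pattern blocklists, so treat a match as a signal, not a verdict.
SUSPICIOUS = [
    re.compile(r"('|--|;).*\b(or|and)\b", re.IGNORECASE),  # SQL injection shapes
    re.compile(r"<\s*script", re.IGNORECASE),              # script tag for XSS
]

def looks_malicious(value: str) -> bool:
    return any(pattern.search(value) for pattern in SUSPICIOUS)

print(looks_malicious("alice"))                      # False
print(looks_malicious("' OR '1'='1"))                # True
print(looks_malicious("<script>alert(1)</script>"))  # True
```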
Article / Updated 08-01-2018
Evaluation criteria provide a standard for quantifying the security of a computer system or network. These criteria include the Trusted Computer System Evaluation Criteria (TCSEC), Trusted Network Interpretation (TNI), European Information Technology Security Evaluation Criteria (ITSEC), and the Common Criteria.

Trusted Computer System Evaluation Criteria (TCSEC)

The Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, is part of the Rainbow Series developed for the U.S. DoD by the National Computer Security Center (NCSC). It's the formal implementation of the Bell-LaPadula model. The evaluation criteria were developed to achieve the following objectives:

- Measurement: Provides a metric for assessing comparative levels of trust between different computer systems.
- Guidance: Identifies standard security requirements that vendors must build into systems to achieve a given trust level.
- Acquisition: Provides customers a standard for specifying acquisition requirements and identifying systems that meet those requirements.

The four basic control requirements identified in the Orange Book are security policy, assurance, accountability, and documentation.

Security policy: The rules and procedures by which a trusted system operates. Specific TCSEC requirements include:

- Discretionary access control (DAC): Owners of objects are able to assign permissions to other subjects.
- Mandatory access control (MAC): Permissions to objects are managed centrally by an administrator.
- Object reuse: Protects the confidentiality of objects that are reassigned after initial use. For example, a deleted file still exists on storage media; only the file allocation table (FAT) and the first character of the file have been modified. Thus residual data may be restored, which describes the problem of data remanence. Object-reuse requirements define procedures for actually erasing the data.
- Labels: Sensitivity labels are required in MAC-based systems. Specific TCSEC labeling requirements include integrity, export, and subject/object labels.

Assurance: Guarantees that a security policy is correctly implemented. Specific TCSEC requirements (listed here) are classified as operational assurance requirements:

- System architecture: TCSEC requires features and principles of system design that implement specific security features.
- System integrity: Hardware and firmware operate properly and are tested to verify proper operation.
- Covert channel analysis: TCSEC requires covert channel analysis to detect unintended communication paths not protected by a system's normal security mechanisms. A covert storage channel conveys information by altering stored system data. A covert timing channel conveys information by altering a system resource's performance or timing. (A toy timing-channel example appears at the end of the TCSEC discussion below.) A systems or security architect must understand covert channels and how they work in order to prevent their use in the system environment.
- Trusted facility management: The assignment of a specific individual to administer the security-related functions of a system. Closely related to the concepts of least privilege, separation of duties, and need-to-know.
- Trusted recovery: Ensures that security isn't compromised in the event of a system crash or failure. This process involves two primary activities: failure preparation and system recovery.
- Security testing: Specifies required testing by the developer and the National Computer Security Center (NCSC).
- Design specification and verification: Requires a mathematical and automated proof that the design description is consistent with the security policy.
- Configuration management: Identifying, controlling, accounting for, and auditing all changes made to the Trusted Computing Base (TCB) during the design, development, and maintenance phases of a system's lifecycle.
- Trusted distribution: Protects a system during transport from a vendor to a customer.

Accountability: The ability to associate users and processes with their actions. Specific TCSEC requirements include:

- Identification and authentication (I&A): Systems need to track who performs what activities.
- Trusted path: A direct communications path between the user and the Trusted Computing Base (TCB) that doesn't require interaction with untrusted applications or operating-system layers.
- Audit: Recording, examining, analyzing, and reviewing security-related activities in a trusted system.

Documentation: Specific TCSEC requirements include:

- Security Features User's Guide (SFUG): User's manual for the system.
- Trusted Facility Manual (TFM): System administrator's and/or security administrator's manual.
- Test documentation: According to the TCSEC manual, this documentation must be in a position to "show how the security mechanisms were tested, and results of the security mechanisms' functional testing."
- Design documentation: Defines system boundaries and internal components, such as the Trusted Computing Base (TCB).

The Orange Book defines four major hierarchical classes of security protection and numbered subclasses (higher numbers indicate higher security):

- D: Minimal protection
- C: Discretionary protection (C1 and C2)
- B: Mandatory protection (B1, B2, and B3)
- A: Verified protection (A1)

These classes are further defined in the following table.

TCSEC Classes
Class | Name | Sample Requirements
D | Minimal protection | Reserved for systems that fail evaluation.
C1 | Discretionary protection (DAC) | System doesn't need to distinguish between individual users and types of access.
C2 | Controlled access protection (DAC) | System must distinguish between individual users and types of access; object-reuse security features required.
B1 | Labeled security protection (MAC) | Sensitivity labels required for all subjects and storage objects.
B2 | Structured protection (MAC) | Sensitivity labels required for all subjects and objects; trusted path requirements.
B3 | Security domains (MAC) | Access control lists (ACLs) are specifically required; system must protect against covert channels.
A1 | Verified design (MAC) | Formal Top-Level Specification (FTLS) required; configuration management procedures must be enforced throughout the entire system lifecycle.
Beyond A1 | (not a formal class) | Self-protection and reference monitors are implemented in the Trusted Computing Base (TCB); TCB verified to source-code level.

You don't need to know the specific requirements of each TCSEC level for the CISSP exam, but you should know at which levels DAC and MAC are implemented and the relative trust levels of the classes, including numbered subclasses.

Major limitations of the Orange Book include:

- It addresses only confidentiality issues; it doesn't include integrity and availability.
- It isn't applicable to most commercial systems.
- It emphasizes protection from unauthorized access, despite statistical evidence that many security violations involve insiders.
- It doesn't address networking issues.
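To make the covert timing channel mentioned earlier concrete, the toy Python sketch below leaks bits purely through the delays between otherwise innocuous messages; the delay lengths and threshold are arbitrary choices for illustration.

```python
import time

def sender(bits, long_delay=0.2, short_delay=0.05):
    """Leak bits by modulating the delay between otherwise innocent replies."""
    for bit in bits:
        time.sleep(long_delay if bit else short_delay)
        yield "ok"   # the visible traffic itself carries no data at all

def receiver(channel, threshold=0.12):
    """Recover the bits by timing the gaps between the sender's replies."""
    bits = []
    last = time.perf_counter()
    for _ in channel:
        now = time.perf_counter()
        bits.append(1 if now - last > threshold else 0)
        last = now
    return bits

print(receiver(sender([1, 0, 1, 1])))  # [1, 0, 1, 1]
```

Nothing in the message contents reveals the leak, which is exactly why TCSEC requires dedicated covert channel analysis at the higher evaluation classes.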
Trusted Network Interpretation (TNI)

Part of the Rainbow Series, like TCSEC (discussed in the preceding section), Trusted Network Interpretation (TNI) addresses confidentiality and integrity in trusted computer/communications network systems. Within the Rainbow Series, it's known as the Red Book. Part I of the TNI is a guideline for extending the system protection standards defined in the TCSEC (the Orange Book) to networks. Part II of the TNI describes additional security features such as communications integrity, protection from denial of service, and transmission security.

European Information Technology Security Evaluation Criteria (ITSEC)

Unlike TCSEC, the European Information Technology Security Evaluation Criteria (ITSEC) addresses confidentiality, integrity, and availability, and it evaluates an entire system, defined as a Target of Evaluation (TOE), rather than a single computing platform. ITSEC evaluates functionality (security objectives, or why; security-enforcing functions, or what; and security mechanisms, or how) and assurance (effectiveness and correctness) separately. The ten functionality (F) classes and seven evaluation (E) (assurance) levels are listed in the following table.

ITSEC Functionality (F) Classes and Evaluation (E) Levels Mapped to TCSEC Levels
(F) Class | (E) Level | Description
NA | E0 | Equivalent to TCSEC level D
F-C1 | E1 | Equivalent to TCSEC level C1
F-C2 | E2 | Equivalent to TCSEC level C2
F-B1 | E3 | Equivalent to TCSEC level B1
F-B2 | E4 | Equivalent to TCSEC level B2
F-B3 | E5 | Equivalent to TCSEC level B3
F-B3 | E6 | Equivalent to TCSEC level A1
F-IN | NA | TOEs with high integrity requirements
F-AV | NA | TOEs with high availability requirements
F-DI | NA | TOEs with high integrity requirements during data communication
F-DC | NA | TOEs with high confidentiality requirements during data communication
F-DX | NA | Networks with high confidentiality and integrity requirements

You don't need to know the specific requirements of each ITSEC level for the CISSP exam, but you should know how the basic functionality levels (F-C1 through F-B3) and evaluation levels (E0 through E6) correlate to TCSEC levels.

Common Criteria

The Common Criteria for Information Technology Security Evaluation (usually just called Common Criteria) is an international effort to standardize and improve existing European and North American evaluation criteria. The Common Criteria has been adopted as an international standard in ISO 15408. The Common Criteria defines eight evaluation assurance levels (EALs), which are listed in the following table.

The Common Criteria
Level | TCSEC Equivalent | ITSEC Equivalent | Description
EAL0 | N/A | N/A | Inadequate assurance
EAL1 | N/A | N/A | Functionally tested
EAL2 | C1 | E1 | Structurally tested
EAL3 | C2 | E2 | Methodically tested and checked
EAL4 | B1 | E3 | Methodically designed, tested, and reviewed
EAL5 | B2 | E4 | Semi-formally designed and tested
EAL6 | B3 | E5 | Semi-formally verified design and tested
EAL7 | A1 | E6 | Formally verified design and tested

You don't need to know the specific requirements of each Common Criteria level for the CISSP exam, but you should understand the basic evaluation hierarchy (EAL0 through EAL7, in order of increasing levels of trust).