Articles From Timothy L. Warner
Article / Updated 03-27-2020
If you've spent any time in Azure Monitor, you've seen some of the myriad log files that your Azure resources create. Think of all the ways that data is represented in Microsoft Azure, and imagine a way to put all your logs in a single data lake and run queries against it seamlessly. Azure Log Analytics is a platform in which you do just that: aggregate VM and Azure resource log files into a single data lake (called a Log Analytics workspace) and then run queries against the data, using a Microsoft-created data access language called Kusto (pronounced KOO-stoh) Query Language (KQL). You'll find that Log Analytics normalizes all these different log streams into a tabular structure. You'll also discover that KQL is similar to Structured Query Language (SQL), the data access language that is standard for relational databases.

Creating a Log Analytics workspace

The first order of business is to deploy a Log Analytics workspace. Then you can on-board as few or as many Azure resources to the workspace as you need. You can also deploy more than one Log Analytics workspace to keep your log data separate. To create a new Azure Log Analytics workspace, follow these steps:

1. In the Azure portal, browse to the Log Analytics Workspaces blade, and click Add. The Log Analytics Workspace blade appears.
2. Complete the Log Analytics Workspace blade. You'll need to provide the following details: workspace name, subscription name, resource group name, location, and pricing tier.
3. Click OK to submit your deployment.

Log Analytics has a free tier as well as several paid tiers. The biggest free-tier limitations are
- A data ingestion limit of 5 GB per month
- A 30-day data retention limit

Connecting data sources to the Azure Log Analytics workspace

With your workspace online, you're ready to on-board Azure resources into said workspace.
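If you prefer scripting over portal clicks, the workspace deployment can also be sketched with the Azure CLI. The resource group, workspace name, and region below are placeholders; check your CLI version for the exact command surface:

```shell
# Placeholders: substitute your own resource group, workspace name, and region.
az monitor log-analytics workspace create \
  --resource-group MyResourceGroup \
  --workspace-name MyWorkspace \
  --location eastus
```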
To connect Azure resources to the workspace, go back to Monitor → Diagnostic Settings, enable diagnostics, and point the log streams to your workspace. You can connect VMs to the workspace directly from the workspace's Settings menu. Follow these steps:

1. In your Log Analytics workspace Settings menu, click Virtual Machines. You see a list of all VMs in the workspace's region, along with which VMs are connected to the workspace and which are not.
2. If necessary, use the filter controls until you see the VM you want to connect. You can link a VM to only one workspace at a time. In the example below, the vm1 virtual machine is linked to another workspace.
3. Select the desired VM, and click Connect. Behind the scenes, Azure deploys the Log Analytics agent (formerly called Microsoft Monitoring Agent) to the VM.
4. Verify that the VM is connected to the workspace. You can see this information in your workspace settings, or you can revisit your VM's Extensions blade and verify that the MicrosoftMonitoringAgent extension is installed.

You should know that Log Analytics can on-board on-premises VMs, particularly those managed by System Center Operations Manager, just as it can native cloud Linux and Windows Server VMs.

You can disconnect a VM from its current workspace and connect it to another one. This operation is trivial, taking only two minutes or so to complete. To do this, simply select the VM from within the workspace and click Disconnect on the toolbar.

Writing KQL queries

You need to know a bit about how to access your Log Analytics workspace data with KQL. KQL is fast and easy to learn, and it should seem familiar to you if you've used Splunk Search Processing Language, SQL, PowerShell, or the Bash shell.

Touring the Log Search interface

You can get to the Log Search interface by opening Monitor and selecting the Logs blade. Another way to get there is to go to your Log Analytics workspace and click the Logs setting.
A third method is to use the Log Analytics Query Playground, where you can work with an enormous data set, getting to know Log Analytics before generating a meaningful data set of your own. Follow these steps to run some sample KQL queries:

1. Go to the Log Analytics portal demo. This site is authenticated, but don't worry: You're using Microsoft's subscription, not your own.
2. Expand some of the tables in the Schema list. There's a lot in this list. Log Analytics normalizes all incoming data streams and projects them into a table-based structure.
3. Expand the LogManagement category; then expand the Alert table, where you can use KQL to query Azure Monitor alerts. The t entries (shown under the expanded SecurityEvent item below) are properties that behave like columns in a relational database table.
4. On the Log Search toolbar, click Query Explorer, expand the Favorites list, and run the query Security Events Count by Computer During the Last 12 Hours. This environment is a sandbox. Microsoft has not only on-boarded untold resources into this workspace but also written sample queries to let you kick the tires.
5. In the results list, click Chart to switch from Table view to Chart view. You can visualize your query results automatically with a single button click. Not every result set lends itself to graphical representation, but the capability is tremendous.
6. Click Export, and save your query results (displayed columns only) to a CSV file. Note the link to Power BI, Microsoft's cloud-based business intelligence/dashboard generation tool.

Writing basic KQL queries

For fun, let's try an obstacle course of common KQL queries. Click the plus sign in the Log Search query interface to open a new tab. (The query window sports a multitab interface like those in Visual Studio and Visual Studio Code.) To get a feel for a table, you can instruct Azure to display any number of rows in no particular order.
To display 10 records from the SecurityEvent table, for example, use the following command:

SecurityEvent
| take 10

Did you notice that the query editor attempted to autocomplete your query as you typed? Take advantage of that convenience by pressing Tab when you see the appropriate autocomplete choice appear.

Use the search keyword to perform a free-text query. The following query looks in the SecurityEvent table for any records that include the string "Cryptographic":

search in (SecurityEvent) "Cryptographic"
| take 20

When you press Enter, you'll doubtless notice the pipe character (|). This character functions the same way here as it does in PowerShell or the Bash shell: Output from one query segment is passed to the next segment via the pipe, a powerful construct for sure.

You can ramp up the complexity by finishing with filtering and sorting. The following code both filters on a condition and sorts the results in descending order based on time:

SecurityEvent
| where Level == 8 and EventID == 4672
| sort by TimeGenerated desc

If you're thinking, "Wow, these KQL queries act an awful lot like SQL!" you're right on the money. Welcome to Log Analytics!
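To tie things together, the saved query you ran in the playground can be approximated with an aggregation like the following sketch. The table and column names come from the demo workspace; treat the query as illustrative:

```kusto
SecurityEvent
| where TimeGenerated > ago(12h)
| summarize EventCount = count() by Computer
| sort by EventCount desc
```

The summarize operator plays the role that GROUP BY plays in SQL: it buckets rows by Computer and counts the events in each bucket.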
Article / Updated 03-27-2020
Ready to work with Microsoft Azure's technology? Here, you learn how to deploy Linux and Windows VMs (virtual machines) from Azure Marketplace.

Deploying a Linux VM

It's understandable that most Azure experts get a kick out of the fact that Microsoft supports Linux natively in Azure. It was inconceivable up until a handful of years ago that we'd be able to run non-Windows VMs by using Microsoft technologies. Here, you learn how to deploy a simple Linux web server in a new virtual network, using an Ubuntu Linux 18.04 Long-Term Support (LTS) VM image from the Azure Marketplace.

Deploying from the Azure portal

Follow these steps to deploy a Linux VM in the Azure portal:

1. Choose Favorites → Create a Resource, and choose Ubuntu Server 18.04 LTS. Alternatively, you can browse to the Virtual Machines blade and click Add to deploy a new resource. If Ubuntu doesn't show up in the Azure Marketplace list, type its name to find it in the VM template gallery.
2. On the Create a Virtual Machine blade, complete the Basics tab. The following image shows the Create a Virtual Machine blade. Oh, boy, it's tough not to feel overwhelmed when you see all the tabs: Basics, Disks, Networking, Management, Advanced, Tags, and Review + Create. Use the following information to complete the other fields:
   - Virtual Machine Name: The name needs to be unique only within your resource group. Pay attention to the validation helpers that pop up when you place your cursor in a field.
   - Availability Options: If you don't see both Availability Zone and Availability Set in the drop-down menu, choose a different region. (East US 2 is a good choice.) Because this is a practice deployment, you can choose No Infrastructure Redundancy Required.
   - Image: You specified the Ubuntu image in step 1, but if you're curious about other options, open the drop-down menu to see the most popular VM images. You can also click Browse All Public and Private Images to view all templates in the Azure Marketplace.
   - Size: For now, accept the Microsoft-recommended default VM size.
   - Authentication Type: Linux VMs are different from Windows VMs because you can use Secure Shell (SSH) key-based authentication or password-based authentication. For this exercise, choose Password. You should choose a creative default administrator account name; ARM won't let you use commonly guessed administrator account names such as root and admin.
   - Public Inbound Ports: For testing purposes, associate a public IP address with this VM, and connect to the instance via SSH. You'll tighten network security group security later to prevent unauthorized access attempts by Internet-based bad actors.
   - Select Inbound Ports: Choose SSH.
3. Complete the Disks tab. This tab is where you make an initial choice about the VM's OS and data disks. Choose Standard HDD to save money. The number of data disks you can create depends on your chosen VM instance size. You can always add data disks later, so for now, proceed to the next blade. Note that the default number of data disks Azure provides the new VM is zero; it's up to you as administrator to decide whether you want to use them.
4. Complete the Networking tab. You have some crucially important decisions to make in terms of where you want to place your VM and how you want to configure its connectivity. Here are the configuration options:
   - Virtual Network: The template deploys a new virtual network by default. That's what you want here, so leave this setting alone. If you place your VM on the wrong virtual network, you'll need to redeploy it to move it, which is a pain, so try to make the right choice the first time around.
   - Subnet: Leave this setting at its default.
   - Public IP: Leave this setting at its default. You do in fact want a public IP address, at least initially.
   - NIC Network Security Group: Select Basic.
   - Public Inbound Ports: Allow Selected Ports.
   - Select Inbound Ports: Select SSH.
   - Accelerated Networking: Not all VM templates support this option.
     For VM templates that support this feature, accelerated networking gives the VM a network speed boost by allowing it to use the Azure networking backbone more directly.
   - Load Balancing: Select No.
5. Complete the Management tab. Ensure that Boot Diagnostics is enabled and all other options are off. Boot Diagnostics is required to use the VM serial console, so it's always a good idea to enable it sooner rather than later.
6. Review the Advanced and Tags tabs. You don't need any of these options right now, but they're worth examining. Extensions allow you to inject agent software and/or management scripts into the VM. You can handle configuration after deployment, however. Taxonomic tags are a way to track resources across subscriptions for accounting purposes.
7. Submit the deployment, and monitor progress. Click the Review + Create tab; then click Create after ARM informs you that your selections passed validation. If an error occurs, ARM places a red dot next to the configuration blade(s) where it detects invalid settings.

Connecting to the VM

Use Azure Cloud Shell to make an SSH connection to your VM. Follow these steps:

1. In the Azure portal, browse to the Overview blade of your newly created VM, and note the public IP address. You see a VM's configuration below.
2. Open Cloud Shell, and connect to your Linux VM by specifying your administrator account name and the VM's public IP address. To connect to a Linux VM at 13.68.104.88 using the tim admin account, you type:

   ssh tim@13.68.104.88

3. Type yes to accept the VM's public key, and then type your password to enter the SSH session. At this point, you're working directly on the Linux VM. You can get help for any Linux command by typing man followed by the command name. Scroll through the help document with your arrow keys, and press Q to quit.

Deploying a Windows Server VM

Here, you learn how to create a Windows Server VM by using Visual Studio 2019 Community Edition and an ARM template.
Visual Studio 2019 Community Edition is a free Visual Studio version that you can use for testing, development, and open-source projects. Microsoft also makes a Visual Studio version for macOS.

This procedure is especially important for you to understand for two reasons: ARM templates form a basis for administrative automation, development, and operations, and you'll use ARM templates to complete most tasks.

Setting up your development environment

Follow these high-level steps to get your Visual Studio environment set up:

1. Download Visual Studio 2019 Community Edition, and run the installer. You need administrative permissions on your Windows 10 workstation to install the software.
2. Choose the Azure workload. This step is the most important one. Visual Studio is an integrated development environment that supports multiple development languages and frameworks. For the purpose of deploying a Windows Server VM, you need to install the Microsoft Azure software development kits and tools. The image below illustrates the user interface. You can leave the Azure workload components set at their defaults.
3. After installation, open Visual Studio, and log in to your Azure administrator account. When you start Visual Studio 2019, you'll see a Get Started page. Click Continue Without Code; then open the Cloud Explorer extension by choosing View → Cloud Explorer. Authenticate to Azure Active Directory, and select the Azure subscription(s) you want to work with.

Deploying the VM

Assuming that you have Visual Studio open and you're logged in to your Azure subscription, you're ready to rock. Here, you deploy a Windows Server VM from the Azure Quickstart Templates gallery. In this example, you use a template definition that includes Managed Disks. Follow these steps:

1. In Visual Studio, choose File → New → Project. The Create a New Project dialog box opens.
2. Search the Visual Studio template gallery for Azure Resource Group, select it, and click Next.
3. Name and save your project.
   Choose a meaningful project name such as Simple Windows VM, choose your favorite directory location, and click Create.
4. Select the 101-vm-with-standardssd-disk template in the Azure Quickstart Templates gallery, and click OK. Here's the interface.
5. Double-click your azuredeploy.json template file. This action loads the JavaScript Object Notation (JSON) file into your code editor. Pay particular attention to the JSON Outline pane.
6. Browse the ARM template's contents. The three elements shown in JSON Outline view are
   - parameters: You supply these values to the template at deployment time. Note the allowedValues element on lines 26-30; the template author prepopulated the VM disk types to make validation and deployment simpler.
   - variables: These values represent fixed or dynamic data that is referenced internally within the template.
   - resources: In this deployment, you create four resource types: virtual machine, virtual NIC, virtual network, and public IP address.
7. In Solution Explorer, right-click the project and choose Validate from the shortcut menu. The Validate to Resource Group dialog box opens.
8. Fill in the fields of the dialog box, and then click Edit Parameters to supply parameter values. Visual Studio allows you to validate your template before deploying it to Azure. The resource group is the fundamental deployment unit in Azure; therefore, your deployments must specify a new or existing resource group.
9. Click Validate, and watch the Output window for status messages. Make sure to look behind your Visual Studio application; Azure spawns a PowerShell console session to prompt you to confirm the admin password. The feedback you're looking for in the Output window is: Template is valid. If the template fails validation, Visual Studio is pretty good about telling you the template code line(s) on which it found an issue. You can debug and retry validation as many times as you need to until the template passes.
10. Deploy the VM by right-clicking the project in Solution Explorer and choosing Deploy from the shortcut menu. The shortcut menu contains a reference to your validation configuration.
11. Monitor progress, and verify that the VM exists in the Azure portal. You'll know that the deployment completed successfully when you see the following status line in the Output window: Successfully deployed template 'azuredeploy.json' to resource group 'your-resource-group'.

Connecting to the VM

You normally use Remote Desktop Protocol (RDP) to manage Windows Server remotely on-premises, and Azure is no different. Browse to your new VM's Overview blade, and click Connect. The Remote Desktop Connection dialog box opens, and you can download the .rdp connection file and open it from there. The steps to make an RDP connection are

1. Click Connect on the Overview blade toolbar.
2. Download the RDP connection file to your local computer.
3. Open the connection by using your preferred RDP client software.

Microsoft makes a native Remote Desktop Protocol client for macOS; it's available in the Mac App Store.
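For reference, the azuredeploy.json elements described earlier (parameters, variables, resources) fit together roughly like the skeleton below. The parameter names and values are illustrative, not a copy of the 101-vm-with-standardssd-disk template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName": { "type": "string" },
    "diskType": {
      "type": "string",
      "defaultValue": "StandardSSD_LRS",
      "allowedValues": [ "Standard_LRS", "StandardSSD_LRS", "Premium_LRS" ]
    }
  },
  "variables": {
    "nicName": "[concat(parameters('vmName'), '-nic')]"
  },
  "resources": []
}
```

The resources array, empty here for brevity, is where the virtual machine, virtual NIC, virtual network, and public IP address definitions live.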
Article / Updated 03-27-2020
In a nutshell, Microsoft App Service is a Hypertext Transfer Protocol (HTTP)-based web application hosting service. Think of Microsoft App Service as GoDaddy, but with much more power. The idea is that if you're willing to surrender full control of the app's underlying infrastructure (which is what Azure Virtual Machines is for), you'll receive in exchange
- Global replication and geoavailability
- Dynamic autoscaling
- Native integration into continuous integration/continuous deployment pipelines

Yes, App Service uses virtual machines (VMs) under the hood, but you never have to worry about maintaining them; Microsoft does that for you. Instead, you focus nearly exclusively on your source code and your application.

Microsoft App Service development frameworks and compute environments

Microsoft App Service supports several development frameworks, including the following: .NET Framework, .NET Core, Java, Ruby, Node.js, Python, and PHP.

You can use any of the following as your underlying compute environment in App Service: a Windows Server VM, a Linux VM, or a Docker container.

Web apps

The static or dynamic web application is the most commonly used option in App Service. Your hosted web apps can be linked to cloud- or on-premises databases, API web services, and content delivery networks.

API apps

An application programming interface (API) is a mechanism that offers programmatic (noninteractive) access to your application by using HTTP requests and responses, a programming paradigm known as Representational State Transfer (REST). Nowadays, Microsoft supports API apps from App Service and the API Management service.

Mobile apps

A mobile app provides the back end to an iOS or Android application. Azure mobile apps provide features most smartphone consumers have grown to expect as part of the mobile apps they use, such as social media sign-in, push notifications, and offline data synchronization.
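Because API apps lean on the REST paradigm, here is a minimal Python sketch of what a client-side call boils down to. The host name (using App Service's azurewebsites.net default domain) and the orders route are hypothetical, not a real endpoint:

```python
from urllib.parse import urlencode

# Hypothetical API app endpoint; azurewebsites.net is App Service's
# default domain, but this host and route are illustrative only.
BASE = "https://contoso-api.azurewebsites.net"

def build_request(resource: str, **query) -> str:
    """Compose the URL for a GET request against a REST resource."""
    url = f"{BASE}/api/{resource}"
    if query:
        # Sort keys so the query string is deterministic.
        url += "?" + urlencode(sorted(query.items()))
    return url

print(build_request("orders", status="open", top=10))
```

REST maps HTTP verbs onto resource operations: a GET on this URL reads orders, while a POST to the same route with a JSON body would create one.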
Azure Logic apps

A logic app provides a way for developers to build business workflows without having to know all the underlying APIs in different services. You might create a logic app that triggers whenever someone mentions your company's name on Twitter, for example. Then this app may perform several actions, such as posting a notification message in your sales department's Slack channel or creating a record in your customer relationship management database.

Azure Function apps

A function app allows developers to run specific code at specific times without worrying about the underlying infrastructure. That's why function apps are called serverless applications, or Code as a Service (CaaS) solutions. One company has a function app that sends a confirmation email message to a prospective customer whenever that user creates a new account on its website. Function apps support the C#, F#, and Java programming languages.

Both logic apps and function apps operate on the basis of a trigger. This trigger could be a manual execution command, a time schedule, or a discrete operation that occurs inside or outside Azure.

Azure App Service logical components

The following image shows the components that make up App Service. An App Service web app is powered by an associated App Service plan. This plan is an abstraction layer; you control how much virtual compute you need to power your application or applications, and you dynamically scale vertically to suit your performance requirements and budgetary needs.

The App Service plan is the only required component. You can extend the app's capabilities by integrating it with any of the following:
- Storage account: An App Service plan has persistent storage, but many developers like to use a flexible storage account for additional space.
- Virtual network: You can link an App Service app to a virtual network, perhaps to connect your web app to a database running on a VM.
- Databases: Most web apps nowadays use relational, nonrelational, and/or in-memory databases to store temporary or persistent data.
- Content delivery network: You can place static website assets in a storage account and let Azure distribute the assets globally. This way, your users get a much faster experience because their browsers pull your site content from a low-latency, geographically close source.

App Service plans are organized in three increasingly powerful (and expensive) tiers:
- Dev/Test: F- and B-series VMs with minimal compute and no extra features. This compute level is the least expensive but offers few features and shouldn't be used for production apps.
- Production: S- and P-series VMs with a good balance of compute power and features. This tier should be your App Service starting point.
- Isolated: Called the App Service Environment, and very expensive. Microsoft allocates hardware so that your web app is screened from the public Internet.

You can move within or between tiers as necessary. This capability is one of the greatest attributes of public cloud services. Here, you see an App Service plan.

Azure uses the Azure Compute Unit as a standardized method to classify compute power. You see it referenced in Azure VMs, App Service, and any other Azure resource that uses VMs. Having a standardized performance metric is crucial, because Microsoft uses several types of hardware in its data centers.

Just because you can associate more than one web app with a single App Service plan doesn't mean that you should. Sure, you can save money (the App Service plan incurs run-time costs based on instance size), but the more apps you pack into a single plan, the greater the burden on the underlying VM.

Want to learn more about Microsoft Azure? Check out these ten Azure educational resources.
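As a recap of the plan/app relationship, here is a hedged Azure CLI sketch: the plan is created first at a chosen tier (the S1 SKU falls in the Production tier), and the web app then binds to it. The names below are placeholders:

```shell
# Placeholders: pick your own resource group, plan name, and a globally
# unique web app name (it becomes <name>.azurewebsites.net).
az appservice plan create \
  --resource-group MyResourceGroup \
  --name MyPlan \
  --sku S1

az webapp create \
  --resource-group MyResourceGroup \
  --plan MyPlan \
  --name my-unique-webapp-name
```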
Article / Updated 03-27-2020
If you really want to be successful with Microsoft Azure, you've got to keep up to date with it. To help you with that, here are ten hand-selected Azure educational resources. Happy reading!

Azure documentation

The Azure documentation library is your ultimate source of truth on how Azure products work. Microsoft has open-sourced the documents to GitHub, so community members (like you!) can edit them and submit your changes for review. The living nature of the library means that the documentation evolves with Azure products. You'll also find plenty of multimedia resources to address educational styles that cater more to a hands-on or visual approach.

If you have yet to skill up on Git source-code control and the GitHub collaboration workflow, consider checking out Sarah Guthals and Phil Haack's splendid GitHub For Dummies (John Wiley & Sons, Inc.).

Azure Architecture Center

Microsoft has long since eaten its own dog food, so to speak, which means two things: It uses the products, technologies, and frameworks that it develops, and it shares its best practices with customers. The Azure Architecture Center includes two super-valuable online guides:
- Azure Application Architecture Guide: How Microsoft approaches designing scalable, resilient, performant, and secure application architectures in Azure
- Microsoft Cloud Adoption Framework for Azure: How Microsoft helps customers transition from fully on-premises to hybrid cloud to cloud-native architectures

The center also includes a reference architecture library as well as a varied collection of topology diagrams and explanations. Many of the diagrams are available for free as Microsoft Visio drawing files. If you haven't invested in Visio or an equally good technical diagramming tool, consider doing so. Practically every Azure architect uses either Visio or Microsoft PowerPoint to create Azure architectural diagrams.
Azure REST API Browser

Every action you take in your Azure subscription translates into Representational State Transfer (REST) API calls to the Azure public cloud. The Azure REST API Browser enables you to deep-dive into resource providers, view available operations, and even test them. After the API Browser shows you how the Azure Resource Manager API request/response life cycle works, you can productively build custom applications that interact directly with the ARM REST API.

Microsoft @ edX

edX is a not-for-profit company that hosts massive open online courses (MOOCs, pronounced mooks). Microsoft bought into the edX ecosystem heavily. You'll find a large collection of Azure-related courses that cover most disciplines, including architecture, administration, development, security, Internet of Things, data engineering, data science, and machine learning. These courses are free and require only that you log in with your Microsoft account to track your progress. The multimedia approach is an excellent way to reach people who have different (or multiple) learning styles.

Microsoft Learn

Microsoft Learn is similar in ways to edX, with the exception that Learn is entirely homegrown at Microsoft. You'll find hundreds of free, curated labs on most Azure job roles; the labs are well written and complete. Microsoft includes a sandbox environment in many hands-on labs that gives you access to Microsoft's Azure subscription. This means you learn Azure by working with Azure directly rather than working with a mock-up or simulation.

Azure Certification

If you ask five IT professionals their views on professional certification, you'll likely get five different responses. Some folks require Azure certification to keep up their Microsoft Partner status; other people want certification to differentiate themselves in a crowded job market. Regardless, this site is the one you want to delve into for Azure role-based certification.
In 2018, Microsoft moved from a monolithic Azure certification title to role-based badges. This way, you can demonstrate your Azure expertise in a way that's closely aligned with your present or desired job role, be it administrator, developer, architect, DevOps specialist, and so forth.

It's a good idea to bookmark the Microsoft Worldwide Learning Certification and Exam Offers page. Most of the time, you can find the Exam Replay offer, as well as discounts on MeasureUp practice tests. Exam Replay gives you a voucher for a second exam if you don't pass on your first attempt.

MeasureUp

MeasureUp is Microsoft's only authorized practice-exam provider. It can't be overstated how crucial practice exams are to success on the Azure certification exams. Even seasoned experts have walked into an exam session smiling and exited crying because they weren't expecting the myriad ways that Microsoft Worldwide Learning tests a candidate's skills. Oh, by the way: Microsoft Learn refers to Microsoft Worldwide Learning's online technical education hub. Yes, they are different things.

MeasureUp gives you an accurate exam experience. Just as important as understanding the theory is understanding and being comfortable with how Microsoft assesses your knowledge. On the Azure exams, you can expect to see performance-based labs in which you're given an Active Directory (AD) login to the real Azure portal and required to complete a series of deployment and/or configuration tasks. The exams also feature case studies that present a fictional company's current situation, Azure-related goals, and technical limitations; then you're asked to answer several questions by analyzing the case. These exams are no joke and deserve dedicated preparation time.

Meetup

When you make professional connections, it's a natural outcome that your name will arise when opportunities make themselves manifest.
To that point, you should visit Meetup.com and search for Azure user groups in your area. At this writing, there are 716 groups worldwide with more than 300,000 members. Azure meetups are great opportunities to learn new things, and you'll meet neighbors who do the kind of work you're doing or want to do in the future. IT recruiters make a habit of sponsoring user groups, so you can get plugged into job-search networks at meetups as well.

You should also try Microsoft Reactor, a Microsoft-run community hub listing free meetups for learners and community organizers. The subject matter covers Azure and most other Microsoft technologies and job roles.

CloudSkills

CloudSkills is a boutique cloud consultancy that has Microsoft Azure education as its principal aim. Mike Pfeiffer, a Microsoft Most Valuable Professional (MVP) who has worked as a cloud architect for both Amazon Web Services and Microsoft, created CloudSkills as a training company that offers free and paid training, professional consulting, and general cloud and career guidance. You can find training sessions at CloudSkills.io, free podcast interviews at CloudSkills.fm, and free tutorials at CloudSkills.tv.

Pluralsight

Many of the world's Azure experts train with Pluralsight. Pluralsight and the Azure teams partnered to create an industry-leading online learning library. Many of these video training courses, covering all the major Azure job roles, are available for free. Pluralsight also offers pre- and post-skills assessments as well as mentoring services.
Article / Updated 03-27-2020
When it comes to Microsoft Azure, Cosmos DB is the nonrelational data store solution. Relational databases are all about schema, or structure. Many say that to be a relational database administrator, you need to be a micromanager, because every data row needs to be heavily constrained to fit the table, relationship, and database schema. Although relational databases offer excellent data consistency, they tend to fall down in the scalability department because of all the schema overhead. Enter the nonrelational database.

Rather than call a nonrelational database schemaless, it's more accurate to say that NoSQL databases have a flexible schema. At first, you may think that a flexible schema would make querying nearly impossible, but NoSQL databases overcome this hurdle by partitioning the data set across multiple clustered nodes and applying raw compute power to the query execution. NoSQL is generally understood to mean not only SQL.

Azure Cosmos DB features

Cosmos DB (originally called DocumentDB) is Azure's multimodel, georeplicated, nonrelational data store. You can implement Cosmos DB in conjunction with, or instead of, a relational database system. Look below for a tour of Cosmos DB's features.

Cosmos DB is multimodel

Originally, DocumentDB was a JavaScript Object Notation (JSON) document-model NoSQL database. Microsoft wanted to embrace a wider customer pool, however, so it introduced Cosmos DB, which supports five data models/application programming interfaces (APIs):
- Core (SQL): This API is the successor to the original DocumentDB. The data store consists of JSON documents, and the Core API provides a SQL-like query language that should be immediately comfortable for relational database administrators and developers.
- Azure Cosmos DB for MongoDB API: This API supports the MongoDB wire protocol. MongoDB, also a JSON document store, allows you to query Cosmos DB as though it were a MongoDB instance.
Cassandra: This API is compatible with the Cassandra wide column store database and supports Cassandra Query Language.

Azure Table: This API points to the Azure storage account’s table service. It’s a key/value data store that you can access with representational state transfer (REST) APIs.

Gremlin (graph): This API supports a graph-based data view and the Apache TinkerPop Gremlin query language.

Do you see a theme? The idea is that just about any developer who needs a NoSQL data store should be able to use Cosmos DB without sacrificing original source code or client-side tooling.

Azure Cosmos DB has turnkey global distribution

With a couple of mouse clicks, you can instruct Azure to replicate your Cosmos DB database to however many Azure regions you need to put your data close to your users. Cosmos DB uses a multimaster replication scheme with a 99.999 percent availability service level agreement for both read and write operations.

Azure Cosmos DB has multiple consistency levels

Relational databases always offer strong consistency at the expense of speed and scale. Cosmos DB offers flexibility in this regard, allowing you to select (dynamically) any of five data consistency levels:

Strong: Reads are guaranteed to return the most recently committed version of an item. This level is the slowest-performing but most accurate.

Bounded Staleness, Session, and Consistent Prefix: These consistency levels offer a balance between performance and consistent query results.

Eventual: Reads have no ordering guarantee. This choice is the fastest but least accurate.

Data consistency refers to the requirement that any database transaction change affected data only in allowed ways. With regard to read consistency specifically, the goal is to prevent two users from seeing different results from the same query due to incomplete database replication.

How to Create a Cosmos DB account

Now, you get down to business.
The first task in getting Cosmos DB off the ground is to create a Cosmos DB account. After you have the account, you can define one or more databases. Follow these steps to create a Cosmos DB account:

In the Azure portal, browse to the Azure Cosmos DB blade, and click Add. The Create Azure Cosmos DB Account blade appears.

On the Create Azure Cosmos DB Account blade, complete the Basics page, using the following settings:

Account Name: This name needs to be unique in its resource group.

API: Select Core (SQL).

Geo-Redundancy: Don’t enable this option. If you do, Azure replicates your account to your region’s designated pair.

Multi-Region Writes: Don’t enable this option. You can always enable it later if you need multiple read/write replicas of the account throughout the world.

Review the Network page.

Click Review + Create and then click Create to submit the deployment.

Running and debugging a sample Cosmos DB application

One of the best ways to gain general familiarity with a new Azure service is to visit its Quick Start page. Let’s do this now with Cosmos DB. The following image shows the Cosmos DB Quick Start page. If you follow the Cosmos DB Quick Start tutorial, you can accomplish the following goals:

Create a new Cosmos DB database and container.

Download a .NET Core web application that connects to the Cosmos DB database.

Interact with the container and data store on your workstation by using Visual Studio.

To run a sample application, follow these steps:

On your Cosmos DB account’s Quick Start blade, choose the .NET Core platform, and click Create ‘Items’ Container. Azure creates a container named ‘Items’ with 10 GB capacity and 400 request units (RUs) per second. As you may rightly guess, the RU is the standardized Cosmos DB performance metric.

Click Download to download the preconfigured .NET Core web application.
Unzip the downloaded archive on your system, and double-click the quickstartcore.sln solution file to open the project in Visual Studio 2019 Community Edition.

In Visual Studio, build the solution by choosing Build→Build Solution.

Choose Debug→Start Debugging to open the application in your default web browser.

In the To-Do App with Azure DocumentDB web application that is now running in your web browser, click Create New, and define several sample to-do items. When you add to-do items, you’re populating the Items container in your Azure Cosmos DB database.

Close the browser to stop debugging the application.

For extra practice, use the Visual Studio publishing wizard (which you find by right-clicking your project in Solution Explorer and choosing Publish from the shortcut menu) to publish this app to Azure App Service. This exercise is a great test of the skills you’ve developed thus far.

Interacting with Cosmos DB

To practice interacting with Cosmos DB and your database, return to the Azure portal and look at your Cosmos DB account settings. Here, some key settings are highlighted to make sure you know where to find them:

Data Explorer: Perform browser-based query and database configuration. The following image shows Data Explorer.

Replicate Data Globally: Click the map to replicate your Cosmos DB account to multiple regions. (Additional costs apply.)

Default Consistency: Switch among the five data consistency levels.

Firewall and Virtual Networks: Bind your Cosmos DB account to specific virtual network(s).

Keys: View the Cosmos DB account endpoint Uniform Resource Identifier, access keys, and connection strings.

Now follow these steps to interact with your new Cosmos DB database directly from the Azure portal:

In your Cosmos DB account, select Data Explorer.

Click the Open Full Screen button on the Data Explorer toolbar. This button takes you to Azure's Cosmos DB page in a separate browser tab, giving you more screen real estate to run queries.
The ToDoList planet icon represents the database, and the Items icon represents the container. The container is the Cosmos DB replication unit.

Right-click the Items container and choose New SQL Query from the shortcut menu. The Save Query button is handy for retaining particularly useful queries.

View your existing items by running the default query. The default query provided by Azure in the Data Explorer, SELECT * FROM c, retrieves all documents from your Items container. c is an alias for container. You’ll find the alias to be super useful when you work with the Cosmos DB Core API.

Experiment with different SELECT queries. The double dash (--) denotes a one-line comment in Data Explorer. Try entering these queries in Data Explorer:

-- select two fields from the container documents
SELECT c.name, c.description FROM c

-- show only incomplete todo items
SELECT * FROM c WHERE c.isComplete = false

Use Data Explorer to update a document. In the SQL API tree view on the left side of the image above, expand the Items container, and select Items. Click the document you want to edit, and update the isComplete field on at least one document from “false” to “true,” or vice versa. Click the Update toolbar button to save your changes.

Bookmark the Microsoft Azure DB page so that you can easily access Cosmos DB Data Explorer in a full-window experience. Also, Microsoft offers excellent Cosmos DB SQL query cheat sheets.
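To see what those two queries return, here is a rough Python sketch (not the Cosmos SDK; the to-do documents below are made-up samples) that mirrors the SELECT statements against in-memory JSON documents:

```python
# Made-up sample documents, shaped like the to-do items that the
# Quick Start app stores in the Items container.
docs = [
    {"id": "1", "name": "Groceries", "description": "Buy milk", "isComplete": False},
    {"id": "2", "name": "Taxes", "description": "File returns", "isComplete": True},
]

# SELECT c.name, c.description FROM c
projected = [{"name": d["name"], "description": d["description"]} for d in docs]

# SELECT * FROM c WHERE c.isComplete = false
incomplete = [d for d in docs if d["isComplete"] is False]

print(projected)   # both documents, two fields each
print(incomplete)  # only the incomplete Groceries item
```

The first query projects two fields from every document; the second filters the whole documents by the isComplete flag, just as the WHERE clause does in Data Explorer.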
Article / Updated 03-26-2020
Microsoft Azure includes Logic Apps to help you build workflows, which allow you to automate certain processes rather than manage them manually. Suppose that your company’s marketing department wants to be notified any time a current or prospective customer mentions your corporate handle on Twitter. Without Logic Apps, your developers would have to register for a developer Twitter account and API key, after which they’d need to immerse themselves in the Twitter API to understand how to integrate Twitter with your corporate notification platform. Instead, your developers can configure a Logic App to trigger on particular tweets and send an email message to an Outlook mailbox without knowing the Twitter or Office 365 APIs.

Creating an Azure Logic App

You’re going to create a Logic App that triggers on mentions of the keyword Azure in public tweets. The resulting action sends email notifications to a designated email address. You’ll develop your Logic App in three phases:

Deploying the Logic App resource

Defining the workflow

Testing the trigger and action

Deploying the resource in the Azure portal

Follow this procedure to create the Logic App resource:

In the Azure portal, browse to the Logic Apps blade, and click Add. The Create blade opens.

Complete the Logic App Create blade. There’s not much to creating the resource. Provide the name, subscription, resource group, and location, and specify whether you want to monitor with Log Analytics.

Click Create to submit the deployment.

When deployment is complete, click Go to Resource in the Azure notification menu to open the Logic App. You’ll be taken to the Logic Apps Designer by default. Click the X button in the top-right corner of the interface to close the blade.

Defining the Azure Logic App workflow

Follow these steps to define the Logic App workflow:

If you want to follow along with this exercise, you need Twitter and Office 365 accounts.
Twitter accounts are free, but Office 365 typically is a paid SaaS product. If you like, use another email account (such as Gmail). The Logic Apps connector library is so vast that chances are good you’ll find a service that works for you.

Go to your new Logic App’s Overview blade. Choose Logic App Designer from the Settings menu. This command takes you to the view you saw the first time you opened the Logic App.

Scroll to the Templates section, and select Blank Logic App. Starting with a blank Logic App enables you to become more familiar with the workflow design process, but Azure provides lots of templates and triggers that you can use for other purposes.

In the Search Connectors and Triggers field, type Twitter.

In the search results, select the Twitter trigger category, and then click the When a New Tweet Is Posted trigger.

Click Sign In, and log in to your Twitter account.

Complete the When a New Tweet Is Posted form. Suppose that you want the Logic App to trigger whenever someone mentions the keyword Azure in a tweet; you complete the options this way:

Search Text: “Azure”

Interval: 1

Frequency: Minute

Configuring a Logic App to trigger on the keyword Azure generates a lot of trigger events and creates a potentially bad signal-to-noise ratio in your results. If you’re creating a Logic App for this purpose, experiment with writing more-granular trigger keywords to produce the results you desire.

Click New Step.

Scroll through the connector library list, and look for Outlook. When you find the Outlook connectors, click the Office 365 Outlook connector.

Search the actions list for Send, and select the Send an Email action.

In the Office 365 Outlook connector dialog box, click Sign In, and authenticate to Office 365.

Complete the Send an Email dialog box, preferably using dynamic fields. This step is where the procedure can get a bit messy. Put your destination Office 365 email address in the To field. Place your cursor in the Subject field.
Click Add Dynamic Content to expose the dynamic content pop-up window. Dynamic fields allow you to plug live data into your workflow.

Click Save. Don’t overlook this step! It’s easy to miss.

Testing the Logic App trigger and action

To trigger the action, follow these steps:

On the Logic App workflow toolbar, click Run. You must — absolutely must — switch the Logic App to a running state for it to catch the trigger.

On Twitter, post a tweet that includes a reference to Azure.

Await notification. The tweet should trigger an email to the address you designated.

Another place to check workflow run status is the Overview page, which shows how many times the trigger was tripped, the workflow runs, and any errors.
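Behind the designer, Azure stores every Logic App workflow as a JSON definition. The fragment below is a simplified, hypothetical sketch of that shape for the Twitter-to-Outlook workflow; a real definition generated by the designer also includes a schema reference, API connection objects, and many more properties, and the trigger, action, and address names here are placeholders:

```json
{
  "definition": {
    "triggers": {
      "When_a_new_tweet_is_posted": {
        "type": "ApiConnection",
        "recurrence": { "frequency": "Minute", "interval": 1 },
        "inputs": { "queries": { "searchQuery": "Azure" } }
      }
    },
    "actions": {
      "Send_an_email": {
        "type": "ApiConnection",
        "inputs": {
          "body": {
            "To": "marketing@example.com",
            "Subject": "New Azure mention: @{triggerBody()?['TweetText']}"
          }
        }
      }
    }
  }
}
```

The @{...} expression syntax is how dynamic content, such as the tweet text, is represented once you pick it from the dynamic content pop-up window.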
Article / Updated 03-26-2020
The first step in designing an Azure Function is deciding what you want it to do. Suppose that you created a web application in Azure App Service that allows users to upload image files. Your fictional app’s source code places the user’s uploaded image files in a container named (appropriately enough) images, inside the blob (binary large object) service of an Azure storage account. What if you want to take automatic action on those uploads? Here are some examples:

Automatically converting and/or resizing the image

Performing facial recognition on the image

Generating notifications based on image type

As it happens, it’s possible to trigger an Azure Function based on a file upload. Currently, Azure Functions includes the following triggers, among others:

HTTP: Triggers based on a web hook (HTTP request)

Timer: Triggers based on a predefined schedule

Azure Queue Storage: Triggers based on a Queue Storage message

Azure Service Bus Topic: Triggers based on a message appearing in a designated topic

Azure Blob Storage: Triggers whenever a blob is added to a specified container (the trigger you need for the example)

Durable Functions HTTP Starter: Triggers when another Durable Function calls the current one

Creating an Azure Function

A Function App is a container object that stores one or more individual functions. The image below shows the Function workflow. First, you need to create the Function App. Next, you define the Azure Function itself. Finally, you test and verify that your function is triggered properly.

Creating the Function App

Follow these steps to deploy a new Function App in the Azure portal:

On the Function App blade, click Add. The Basics tab of the Function App blade appears.

Complete the Function App deployment form. Here are some suggested values:

App Name: This name needs to be unique because Function Apps are part of the App Services family and have DNS names within Microsoft’s public azurewebsites.net zone.
OS: Choose the type of operating system you have.

Hosting Plan: Consumption Plan is a good place to start. You can change this setting later if you want or need to do so.

Runtime Stack: Your choices include .NET Core, Node.js, Python, Java, and PowerShell Core.

Storage: A general-purpose storage account is necessary to store your code artifacts.

Application Insights: It’s a good idea to enable Application Insights to gain as much back-end telemetry as you can from your Function App.

Click Create to submit the deployment.

Currently, Microsoft is previewing a new creation experience for Azure Function Apps. Therefore, don’t be surprised if you see the new experience instead of what is shown above. The configuration options are the same, but they’re presented differently. Change in the Azure portal is one thing in life you can always depend on.

Defining the Azure Function

The next step in the Azure Function creation workflow is defining the Function itself. Not too many people find the Function App’s nonstandard user interface to be intuitive. At all. The following explains what’s going on. A single Function App (A) contains one or more Functions. Each Function that runs in the Function App’s run space is stored here (B). A Function Proxy (C) allows you to present a single API endpoint for all Functions in a single Function App to other Azure or non-Azure API clients. Function Apps support deployment slots (D); they serve the same purpose as deployment slots for App Service web apps, and they support slot swapping. Function App Settings (E) is where you specify settings for a Function App, Function, or Proxy, depending on the context.

Before you create the function, you should create the images blob container you’ll need for this website-upload example. If you have Azure Storage Explorer installed, proceed! Azure Storage Explorer is a free, cross-platform desktop application that makes working with Azure storage accounts a snap.
Follow these steps to create a blob container for the new Azure Function:

Open Azure Storage Explorer, and authenticate to your Azure subscription.

Expand the storage account you created for your Function App, right-click Blob Containers, and choose Create Blob Container from the shortcut menu. Azure creates the container and places your cursor next to it, all ready for you to name the new object.

Name the container images, and press Enter to confirm.

Creating the Azure Function

Excellent. You’re almost finished. Now you need to create a function. Follow these steps:

Open your Function App in the Azure portal.

In the Functions row, click the plus sign. The details pane on the right leads you through a Function wizard, the first step of which is choosing a development environment.

Choose In-Portal as your development environment, and click Continue. Both Visual Studio and Visual Studio Code have native support for writing Functions.

In the Create a Function pane, select More Templates, and then click Finish and View Templates to open the Azure Function trigger gallery.

In the gallery, select the Azure Blob Storage trigger. You’ll probably be prompted to install the Azure Storage extension; click Install, and let the process complete.

Complete the New Function blade, filling in the following fields:

Name: This is the name of the Function that will be contained inside the Function App. The name should be short and descriptive.

Azure Blob Storage Trigger Path: This setting is important. You want to leave the {name} bit alone because it’s a variable that represents any uploaded blob. The path should look like this: images/{name}

Storage Account Connection: Click New, and select your Function’s storage account.

Click Create when you’re finished. When you select your function on the Function App’s left menu, you can see your starter C# source code.
For this example, the code should look like this:

public static void Run(Stream myBlob, string name, ILogger log)
{
    log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
}

This code says, “When the function is triggered, write the name and size of the blob to the console log.” This example is simple so that you can focus on how Functions work instead of getting bogged down in programming-language semantics.

The new Function contains the following three settings blades:

Integrate: This blade is where you can edit your trigger, inputs, and outputs.

Manage: This blade is where you can disable, enable, or delete the Function, as well as define host keys that authorize API access to the Function.

Monitor: Here, you can view successful and failed Function executions and optionally access Application Insights telemetry from the Function.

Testing your Azure Function

All you have to do to test your function is follow these steps:

In Azure Storage Explorer, upload an image file to the images container. To do this, select the container and then click Upload from the Storage Explorer toolbar. You can upload individual files or entire folders. Technically, you can upload any file, image or otherwise, because all files are considered to be blobs.

In the Azure portal, select your Function App, switch to the Logs view, and watch the output.

Configuring Azure Function App settings

You may want to switch from Consumption to App Service plan pricing, for example, or vice versa. Or maybe you want to test a different run-time environment. Navigating the Function App configuration blades can be difficult, mainly because they’re completely different from the rest of the Azure portal UI. I also think there’s some redundancy in the places where some settings controls are located.
To make things easier, here’s a summary of the major capabilities of each settings blade in the Function App. You need to be at the Function App scope (above Functions in the navigation UI); you can see this convention below.

Overview: Use this blade to stop, start, and restart the Function App. You also can download all site content for use with a local integrated development environment.

Platform Features: This blade is where most Function App settings categories reside.

Function App Settings: From this blade, you can view usage quota and manage host keys.

Configuration, Application Settings: This blade is where you manage application key/value pair settings and database connection strings.

Configuration, General Settings: This blade is where you can configure platform architecture, FTP state, HTTP version, and remote debugging.
Article / Updated 03-25-2020
AKS (Azure Kubernetes Service) began life as Azure Container Service (ACS), which supported multiple container orchestration platforms, including Kubernetes, Swarm, and DC/OS. The downsides of ACS were its complexity and the fact that most customers wanted first-class support for Kubernetes only. Therefore, although you may see an occasional reference to ACS in the Azure portal or elsewhere, let’s just ignore them and focus exclusively on AKS. Here, you learn a bit about the AKS architecture, some of its benefits, and a bird’s-eye perspective on using AKS in Azure.

Developers don’t necessarily start containers because they’re fun to use; developers start containers because they’re practical. Containers host application components, such as web servers or database servers, that combine to form application solutions. Therefore, try to relate the words container and application from now on.

Azure Kubernetes Service architecture

The image below shows the basic elements of AKS:

Master node: Microsoft abstracts the control plane (called the master node in Kubernetes nomenclature), so you can focus on your worker nodes and pods. This hosted Platform as a Service (PaaS) model is one reason why many businesses love AKS. The master node is responsible for scheduling all the communications between Kubernetes and your underlying cluster.

Worker node: In AKS, the worker nodes are the VMs that make up your cluster. The cluster gives you lots of parallel computing, the ability to move pods between nodes easily, the ability to perform rolling updates of nodes without taking down the entire cluster, and so on. One option is using Azure Container Instances (ACI) to serve as worker nodes. The below image also shows Azure Container Registry (ACR), from which AKS can pull stored images. Isn’t all this Azure integration compelling?

Pod: The pod is the smallest deployable unit in the AKS ecosystem.
A pod may contain one Docker container, or it might contain a bunch of containers that you need to stay together, communicate with one another, and behave as a cohesive unit.

Azure Kubernetes Service administration notes

Now, let’s take a look at how developers and administrators interact with AKS. From a control-plane perspective, you have the Azure portal, with which you can protect your AKS cluster with role-based access control, upgrade your Kubernetes version, scale out the cluster, add or remove worker nodes, and so on. From the application-plane perspective, Microsoft wanted to ensure that customers don’t have to learn a new tool set to work with containers in AKS.

kubectl command-line tool

Most Kubernetes professionals use kubectl (generally pronounced KOOB-see-tee-el, KOOB-control, or KOOB-cuttle) to interact with their Kubernetes cluster and its pods programmatically. If you have Azure CLI installed on your workstation, you can install kubectl easily by issuing the following command:

az aks install-cli

In fact, Azure CLI seems to borrow quite a bit from kubectl syntax in terms of its context-command workflow. To list your running pods (containers) with kubectl, for example, run:

$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
azure-database-3406967446-nmpcf   1/1     Running   0          25m
azure-web-3309479140-3dfh0        1/1     Running   0          13m

Kubernetes web UI

The Kubernetes web UI is a graphical dashboard that gives administrators and developers a robust control surface. This image shows the interface. Once again, you should use Azure CLI to connect to the dashboard; doing so isn’t possible from the Azure portal. Here’s the relevant command:

az aks browse --resource-group myResourceGroup --name myAKSCluster

The az aks browse command creates a proxy between your workstation and the AKS cluster running in Azure; it provides the connection URL in its output. The typical connection URL is http://127.0.0.1:8001.
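To make the pod concept concrete, here is a minimal, hypothetical pod manifest (the names and image are placeholders) that wraps a single container; you could apply it to a cluster with kubectl apply -f pod.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: azure-web           # hypothetical pod name
spec:
  containers:
    - name: web             # one container; a pod can hold several
      image: nginx:1.25     # any container image; AKS can also pull from ACR
      ports:
        - containerPort: 80
```

A multi-container pod simply lists more entries under containers; all of them share the pod’s network namespace, which is what lets them behave as a cohesive unit.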
Article / Updated 03-25-2020
Microsoft Azure is a set of cloud services that helps organizations meet business challenges. The services include tools for managing and deploying applications. Microsoft offers PaaS (platform-as-a-service), IaaS (infrastructure-as-a-service), and SaaS (software-as-a-service) platforms to help businesses manage their technology demands.

Azure deployment models

In Azure nomenclature, deployment refers to provisioning resources in the Azure public cloud. You may be saying, “What’s this? Why is Microsoft Azure called a public cloud? Microsoft always says that different Azure customers can never see each other’s resources by default.” Hang on; hang on. Let’s explain.

Public cloud

Microsoft Azure is a public cloud because its global data center fabric is accessible by the general public. Microsoft takes Azure’s multitenant nature very seriously; therefore, it adds layer after layer of physical and logical security to ensure that each customer’s data is private. In fact, in many cases, even Microsoft doesn’t have access to customers’ data encryption keys! Other major cloud service providers — including AWS, GCP, Oracle, and IBM (see the nearby sidebar “Other cloud providers”) — are also considered to be public cloud platforms.

Microsoft has three additional, separate Azure clouds for exclusive governmental use. Thus, the Microsoft literature contains references to Azure Cloud, which refers to its public cloud, and to Azure Government Cloud, which refers to its sovereign, special-access clouds. No member of the general public can access an Azure Government Cloud without being associated with a government body that employs it.

Private cloud

Very, very few businesses have enough financial, capital, and human resources to host their own cloud environments.
Typically, only the largest enterprise organizations can afford to have their own private cloud infrastructure with redundant data centers, storage, networking, and compute, but they may have security prohibitions against storing data in Microsoft’s (or any other cloud provider’s) physical data centers. Microsoft sells a portable version of the Azure cloud: Azure Stack, which consists of a server rack that a company leases or purchases from a Microsoft-affiliated hardware or service provider. The idea is that you can bring the hallmarks of cloud computing — on-demand self-service, resource pooling, elasticity, and so forth — to your local environment without involving either the Internet or an external cloud provider unless you want to. Your administrators and developers use the same Azure Resource Manager (ARM) application programming interface (API) to deploy resources locally to Azure Stack as they use to deploy to the Azure public cloud. This API makes it a snap to bring cloud-based services on premises, and vice versa.

Hybrid cloud

When you combine the best of on-premises and cloud environments, you have a hybrid cloud. The hybrid cloud deployment model most often makes the most sense for businesses. Why? A hybrid cloud allows the business to salvage (read: continue to use) the on-premises infrastructure that it’s already paid for while leveraging the hyperscale of the Azure public cloud. Take a look at the image below. In this topology, the on-premises network is extended to a virtual network running in Azure. You can do all sorts of nifty service management here, including:

Joining the Azure virtual machines (VMs) to your local Active Directory domain

Managing your on-premises servers by using Azure management tools

Providing nearly instant failover disaster recovery (DR) by using Azure as a DR site
Failover refers to having a replicated backup of your production servers available somewhere else so that you can shift from your failed primary environment to your backup environment within minutes. Failover is critical for businesses that cannot afford the downtime involved in restoring backups from a backup archive. Here’s an overview of what’s going on: On the left side is a local business network that connects to the Internet via a virtual private network (VPN) gateway. On the right (Azure) side is a three-VM deployment in a virtual network. A site-to-site VPN connects the local environment to the virtual network. Finally, an Azure load balancer spreads incoming traffic equally among the three identically configured web servers in the web tier subnet. As a result, the company’s internal staff can access the Azure-based web application over a secure VPN tunnel and get a low-latency, reliable, always-on connection to boot.

A local, physical network environment is referred to as an on-premises environment. In the wild, you’ll see stray references to “on premise” — sadly, even in Microsoft’s Azure documentation. Don’t make this mistake. A premise is an idea; premises refers to a location.

Usually, it’s only small businesses that are agile enough to do all their work in the Azure cloud. That said, you may find that after your organization gets its sea legs with Azure and begins to appreciate its availability, performance, scalability, and security possibilities, you’ll be working to migrate more on-premises infrastructure into Azure, and you’ll be targeting more of your line-of-business (LOB) applications to the cloud first.

Azure service delivery models

Organizations deploy applications in three primary ways: Software as a Service, Infrastructure as a Service, and Platform as a Service.

Software as a Service (SaaS)

A SaaS application is a finished, customer-facing application that runs in the cloud. Microsoft Office 365 is a perfect example.
As shown below, you can use Word Online to create, edit, and share documents with only a web browser, an Internet connection, and an Office 365 subscription, which you pay for each month on a subscription basis. With SaaS applications, you have zero visibility into the back-end mechanics of the application. In the case of Word Online, you neither know nor care how often the back-end servers are backed up, where the Office 365 data centers are geographically located, and so forth. All you care about is whether you can get to your cloud-hosted documents and whether Word Online behaves as you expect.

Platform as a Service (PaaS)

Consider a business that runs a three-tier on-premises web application with VMs. The organization wants to move this application workload to Azure to take advantage of the benefits of cloud computing. Because the organization has always done business by using VMs, it assumes that the workload must by definition run in VMs in Azure. Not so fast. Suppose that the workload consists of a Microsoft-stack application. Maybe the business should consider using PaaS products such as Azure App Service and Azure SQL Database to leverage autoscale and pushbutton georeplication. Georeplication means placing synchronized copies of your service in other geographic regions for fault tolerance and to place those services closer to your users. Or maybe the workload is an open-source project that uses PHP and MySQL. No problem. Azure App Service can handle that scenario. Microsoft also has a native hosted database platform for MySQL called (appropriately enough) Azure Database for MySQL.

With PaaS, Microsoft takes much more responsibility for the hosting environment. You’re not 100 percent responsible for your VMs because PaaS products abstract all that plumbing and administrative overhead away from you. The idea is that PaaS products free you to focus on your applications and, ultimately, on the people who use those applications.
If PaaS has a trade-off, it’s that relinquishing full-stack control is an adjustment for many old-salt systems and network administrators. To sum up the major distinction between IaaS and PaaS: IaaS gives you full control of the environment, but you sacrifice scalability and agility; PaaS gives you full scalability and agility, but you sacrifice some control. To be sure, the cloud computing literature contains references to other cloud deployment models, such as community cloud. You’ll also see references to additional delivery models, such as Storage as a Service (STaaS) and Identity as a Service (IDaaS).

Infrastructure as a Service (IaaS)

Most businesses that migrate their applications and services to Azure use the IaaS model, if only because they’ve delivered their services via VMs in the past — the old “If it ain’t broke, don’t fix it” approach. In large part, IaaS is where the customer hosts one or more VMs in a cloud. The customer remains responsible for the full life cycle of the VM, including:

Configuration

Data protection

Performance tuning

Security

By hosting your VMs in Azure rather than in your on-premises environment, you save money because you don’t have to provision the physical and logical resources locally. You also don’t have to pay for the layers of geographic, physical, and logical redundancy included in Azure out of the box. Thus, whereas SaaS is a service that’s been fully abstracted in the cloud, and the customer simply uses the application, IaaS offers a split between Microsoft’s responsibility (providing the hosting platform) and the customer’s responsibility (maintaining the VMs over their life cycle).

Cloud computing in general, and Microsoft Azure in particular, use what’s called the shared responsibility model. In this model, Microsoft’s responsibility is providing the tools you need to make your cloud deployments successful — Microsoft’s data centers, the server, storage, and networking hardware, and so on.
Your responsibility is to use those tools to secure, optimize, and protect your deployments. Microsoft isn't going to configure, back up, and secure your VMs automatically; those tasks are your responsibility.

Microsoft Azure Services

The Microsoft Azure service catalog has hundreds of services and is continually expanding. Microsoft maintains a services directory where you can review all of them; a brief description is provided below.

Azure history

In October 2008, Microsoft announced Windows Azure at its Professional Developers Conference. Many people feel that this product was a direct answer to Amazon, which had already begun unveiling AWS to the general public. The first Azure-hosted service was SQL Azure Relational Database, announced in March 2009. Support for PaaS websites and IaaS virtual machines followed in June 2012. The following image shows what the Windows Azure portal looked like during that time.

Satya Nadella became Microsoft's chief executive officer in February 2014. Satya had a vision of Microsoft expanding its formerly proprietary borders, so Windows Azure became Microsoft Azure, and the Azure platform began to embrace open-source technologies and companies that Microsoft formerly considered to be hostile competitors. It can't be overstated how important that simple name change was and is. Today, Microsoft Azure provides first-class support for Linux-based VMs and non-Microsoft web applications and services, which is a huge deal.

Finally, Microsoft introduced the Azure Resource Manager (ARM) deployment model at Microsoft Build 2014. The original API behind Windows Azure was called Azure Service Management (ASM), and it suffered from several design and architectural pain points: ASM made it super-difficult to organize deployment resources, for example, and it was impossible to scope administrative access granularly.
Article / Updated 09-05-2019
Wireless networking is a topic you are sure to be tested on when taking the A+ Exam. You are responsible for knowing the wireless standards and the common security steps you should take to help secure a wireless network.

Wireless standards

802.11a: Runs in the 5 GHz frequency range at a speed of 54 Mbps.
802.11b: Runs in the 2.4 GHz frequency range at a speed of 11 Mbps.
802.11g: Runs in the 2.4 GHz frequency range at a speed of 54 Mbps.
802.11n: Runs in the 2.4 GHz or 5 GHz frequency range with a practical speed of approximately 150 Mbps.
802.11ac: Runs in the 5 GHz frequency range with speeds of potentially 866 Mbps or more.

Wireless security

MAC filtering: Control which clients can connect to the wireless network by their MAC address. Enable MAC address filtering, and then add the MAC address of each authorized device.
Change the SSID: Change the SSID of your wireless network so that it is not so obvious. Clients need to know the SSID in order to connect.
Disable SSID broadcasting: After changing the SSID, disable SSID broadcasting, which hides your wireless network from clients.
Enable WPA2: Use WPA2 as the encryption protocol; it is the most secure of WPA2, WPA, and WEP. Ensure that you use a complex key (a mix of letters, numbers, and uppercase and lowercase characters) when setting the key.
Set an admin password: Set an administrator password for your device so others cannot connect to it and change its settings.
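The advice above about choosing a complex WPA2 key (a mix of letters, numbers, and case) can be checked mechanically. The small function below is an illustrative sketch of that advice; the function name is invented for this example, and the mixed-character rule is the article's recommendation, not a requirement of the WPA2 standard itself.

```python
# Illustrative check for the key-complexity advice above: a WPA2 passphrase
# should mix lowercase letters, uppercase letters, and digits. WPA2-PSK
# itself only requires an 8-63 character ASCII passphrase; the mixed-
# character rule is the article's recommendation, encoded here as a sketch.

def is_complex_key(key: str) -> bool:
    """Return True if the key meets WPA2-PSK length limits and mixes case and digits."""
    if not 8 <= len(key) <= 63:          # WPA2-PSK passphrase length limits
        return False
    return (any(c.islower() for c in key)
            and any(c.isupper() for c in key)
            and any(c.isdigit() for c in key))

print(is_complex_key("Fluffy2024Router"))  # True: mixed case plus digits
print(is_complex_key("password"))          # False: no uppercase, no digits
```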