
Hey, Scripting Guy! update


Ed Wilson retired, but we are continuing his legacy of friendly tutorials on solving common problems. We will also help IT professionals expand their skill set beyond what might be their comfort zone. Examples of this include incorporating continuous integration/continuous deployment (CI/CD) tools into core infrastructure management, and hybrid cloud operations.

Ed actively recruited contributors to submit new content.  We are continuing that tradition by making "Hey, Scripting Guy!" a community effort. In the spirit of getting people started in the wide world of scripting for the cloud, we are asking that proposals target new areas of scripting innovation, such as the set of topics listed below.

If you would like to write a post for "Hey, Scripting Guy!", please email scriptingguys@microsoft.com a proposal for a post dealing with:

  • Running scripts in the hybrid cloud.
  • Using scripts to manage the hybrid cloud.
  • Azure PowerShell: cmdlets and scripts.
  • PowerShell open source projects (sharing with the community) (examples: Core, Pester, VSCode, Gallery).
  • Best practices to run PowerShell on Linux systems.
  • Tools that make use of PowerShell (examples: Chef, Puppet).

Be sure to include how you want to be credited in the post, including your name and title, and even a picture of yourself if you would like. We will be happy to work with you to get your work up on "Hey, Scripting Guy!"

Thank you, and we're looking forward to working with the PowerShell community on creating new content for the blog!

 


Weekend Scripter: Exchange add-in module


Summary: Mike O'Neill, Microsoft Senior Premier Field Engineer, created an Exchange add-in module.

This is a PowerShell module for Exchange engineers. The module takes into account both on-premises and hybrid deployed Exchange environments.

It is a combination of several other scripts that are either on the internet, or are action items you might need help with in your day-to-day tasks. By combining several lines of code into a single verb-noun cmdlet, it makes repeatable, mundane tasks much easier. In fact, once you have the hang of them, you could delegate them out to other team members with a few minutes of training. Then you would have more time to spend on other aspects of your job, like creating more functions for even further delegation, or combining tasks into a completely automated process.

Current cmdlets in module

Connect-ExchangeServer. Connects to an on-premises Exchange server. You should not log onto servers, but instead should use remote tools or remote connections. This cmdlet allows for quick and easy connections. There is a parameter available to target a specific server, and logic for a prefix parameter if you want to log onto multiple servers simultaneously.

Connect-ExchangeOnline. Connects to an Exchange Online, Office 365 tenant. No need to run several lines of code; just type in this verb-noun cmdlet to log onto an Exchange Online tenant. Included in this cmdlet is the parameter prefix, if you choose to use one. This allows multiple connections in the same PowerShell window.

Disconnect-ExchangeOnline. Disconnects an Exchange Online session.

Get-DatabaseInformation. Obtains database availability group (DAG) database status. This is handy for on-premises Exchange DAG environments. Quickly shows you status of database copies, which ones are healthy or not, and where the active DB is currently hosted.

Start-DAGMaintenanceMode. Puts a DAG node into DAG maintenance mode. With the Exchange product group's stance of N-1 CUs in a hybrid configuration, you need to upgrade Exchange servers often (every 92 days!). This cmdlet puts a server into maintenance mode so that you can patch and/or upgrade it cleanly.

Stop-DAGMaintenanceMode. Takes a DAG node out of DAG maintenance mode. Once you are done with the upgrade/patching process, this cmdlet lets you take the server out of maintenance mode.

Get-DotNETVersion. Acquires the current .NET version on a server (it can be used for any computer, not just an Exchange server). .NET version questions come up often when you upgrade the operating system (versus an application). This cmdlet quickly tells you which version is currently installed on a machine.

Request-CredentialExchangeOnline. Allows you to reset your credentials, in case you "fat finger" them into the prompt.

Request-CredentialExchangeOnPremises. Allows you to reset your onsite credentials to sign into a server if you entered the credentials incorrectly.

Where to find this module

You have two options:

  • If you are running PowerShell 3 or later, you can simply open PowerShell and type Install-Module Exchange_AddIn. It installs the module for you from the PowerShell Gallery (see the sketch after this list).
  • You can easily install the module manually. Download the zip file from the Microsoft Script Center. After you unzip the file, just copy/paste the Exchange_AddIn folder into one of the following locations:
    • %Windir%\System32\WindowsPowerShell\v1.0\Modules
    • %UserProfile%\Documents\WindowsPowerShell\Modules
    • %ProgramFiles%\WindowsPowerShell\Modules
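Here is a minimal sketch of the gallery-based install followed by a first connection. The cmdlet names come from the list above; the exact parameters (such as a prefix for multiple sessions) are assumptions based on the descriptions, so check Get-Help on your installed version before relying on them.

# Install the module from the PowerShell Gallery, then load it.
Install-Module -Name Exchange_AddIn
Import-Module Exchange_AddIn

# Connect to an Exchange Online tenant, do your work, then disconnect.
Connect-ExchangeOnline
# ...tenant work here...
Disconnect-ExchangeOnline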

 Conclusion

I'm already working on other functions and cmdlets to add in the next version, so stay tuned. I hope you find this Exchange Add-In module helpful. It should assist you in making your day-to-day Exchange tasks within PowerShell quicker and easier. Thank you.

Mike O'Neill

Microsoft Senior PFE, Exchange

PowerTip: Remove calendar events from a mailbox


Summary: Remove calendar events within an Exchange Online mailbox.

Question: Is there an easy way to remove calendar events from within an Exchange Online mailbox?

Answer: Yes, there is. There is a newly deployed cmdlet for Office 365 tenants: Remove-CalendarEvents. This cmdlet is currently available only in Exchange Online, but it does allow administrators to remove calendar events that have attendees from an owner's mailbox. Here are three examples:

Remove-CalendarEvents -Identity chris@contoso.com -CancelOrganizedMeetings

This example cancels every meeting in the mailbox chris@contoso.com that occurs on or after today's date.

Remove-CalendarEvents -Identity "Angela Gruber" -CancelOrganizedMeetings -QueryStartDate 11-1-2018 -QueryWindowInDays 120

This example cancels the meetings in Angela Gruber's calendar for the specified date range. Angela is taking a temporary leave of absence from the company, so cancelling these meetings removes them from the user and resource calendars during her absence.

Remove-CalendarEvents -Identity "Jacob Berger" -CancelOrganizedMeetings -QueryStartDate 9-1-2018 -QueryWindowInDays 90 -PreviewOnly -Verbose

This example previews the meetings that would be cancelled in Jacob Berger's calendar for the specified date range. No changes are made to the mailbox.

The Doctor

Use Docker to automate testing of PowerShell Core scripts


 

Summary: Learn the basics about Docker, and see how to use it for PowerShell Core script testing on different operating systems.

I'm Dan Ward, a Boston-based .NET software engineer who is just plum crazy about PowerShell and automation. Earlier this year, I uploaded my first GitHub project—a PowerShell whitespace cleaner that also replaces aliases with commands and fixes casing issues. Like any manager of a new project, I was excited to see people using it and providing feedback and enhancement requests. But I became concerned when bug reports started coming in: "But I tested this on my machine before uploading, like, a thousand times!" (How often have you said that?)

PowerShell Core

Turns out I was testing in Windows PowerShell on a Windows machine, and the users having issues were using PowerShell Core that was running natively on non-Windows machines. Yes, the next generation of PowerShell is open source, and runs on Windows, Macs, and many Linux distributions. If you haven't read up on PowerShell Core yet, you should—or even better, download it and try it out now.

For most scripting activities, Windows PowerShell and PowerShell Core are very similar—but there are still some differences. And because PowerShell Core now runs across different operating systems, you need to start testing across these operating systems if you want to find any unexpected incompatibilities before your clients do.
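As a tiny illustration of what "cross-platform aware" looks like in practice, here is a minimal sketch of a platform check. The $IsLinux and $IsMacOS automatic variables exist only in PowerShell Core (and their names have shifted between preview builds), so treat this as a sketch rather than gospel:

# These automatic variables exist only in PowerShell Core, so guard for
# Windows PowerShell 5.1 and earlier, where they are not defined.
if ($PSVersionTable.PSEdition -eq 'Core') {
    if ($IsLinux)     { 'PowerShell Core on Linux' }
    elseif ($IsMacOS) { 'PowerShell Core on macOS' }
    else              { 'PowerShell Core on Windows' }
}
else {
    'Windows PowerShell (Desktop edition)'
}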

But, like you, I really don't want to buy a lot of new hardware to test my work on different operating systems. And setting up VMs for all these operating systems is time consuming.

Docker to the Rescue!

Docker is an awesome containerization technology, and if somehow you haven't read about it you need to right now. One of the many benefits of using containers is that you can easily package up software to share with others. And the PowerShell Core team is doing just that: whenever they put out a new version, they provide packages for native installation and Docker images with PowerShell Core for popular operating systems. Want to test the latest version of PowerShell Core on Ubuntu? Just pull that image and start it up. In fact, let's do that right now.

Docker walkthrough with PowerShell Core images

Note: this walkthrough uses Docker for Windows on a Windows 10 Pro machine. If you have an older Windows operating system, you would need to use Docker Toolbox, which may not work out of the box in PowerShell.

If you want to follow along on your own machine, install Docker for Windows before continuing, and download the PowerShell whitespace cleaner.

First, in a browser, let's check out the Microsoft PowerShell Core Docker image repository.

Screenshot of image repository

There are several things to note here:

  • This page is served from Docker Hub. Docker Hub is a public registry; a place where anyone can create a repository and can upload or download container images.
  • This Docker Hub repository name is: microsoft/powershell. Repository names are [team or person name]/[project name].
  • A tag is a label that helps identify a unique image. In the PowerShell Core repository, you can see they are using tags to identify not just different operating systems, but versions of the operating systems.
  • When working with images, you identify them with [repository name]:[tag name]. However, :[tag name] is optional. If tag name is not specified, Docker uses the latest tag. In looking at the microsoft/powershell repository, latest looks like a reference to ubuntu16.04 (same size).

Great! So how do I know what images I have installed locally, and how do I pull an image to my machine?

Listing and pulling images

To list images on your local machine, use the docker images command.

PS C:\> docker images

REPOSITORY     TAG     IMAGE ID     CREATED     SIZE

PS C:\> # no images

Let's pull down a PowerShell Core image for a popular Linux distribution named Ubuntu; we'll use the 16.04 version.


PS C:\> docker pull microsoft/powershell:ubuntu16.04
ubuntu16.04: Pulling from microsoft/powershell
d5c6f90da05d: Pull complete
bbbe761fcb56: Pull complete
7afa5ede606f: Pull complete
f6b7253b56f4: Pull complete
2b8db33536d4: Pull complete
34ce21a172dd: Pull complete
db4e21036b1a: Pull complete
a553072824c6: Pull complete
cc09983fea92: Pull complete
3464f05d5785: Pull complete
Digest: sha256:c91708f5ba6f3b55da158eac0380f3c2bb907039d547b7ae52564dd7a0a3af8a
Status: Downloaded newer image for microsoft/powershell:ubuntu16.04
PS C:\> # let’s make sure we see that image
PS C:\> docker images
REPOSITORY           TAG         IMAGE ID     CREATED      SIZE
microsoft/powershell ubuntu16.04 bbbb17a9d348 11 hours ago 381MB
PS C:\>

As you can see, the first run of docker images had no results. After pulling microsoft/powershell:ubuntu16.04, we now have one image on the machine.

Creating, starting, and connecting to a container

Now that we have the image, we can create a container, and start it up in one step, with the docker run command.


PS C:\> # this is a normal Windows PowerShell
PS C:\> docker run -i -t microsoft/powershell:ubuntu16.04 powershell
PowerShell v6.0.0-beta.7
Copyright (C) Microsoft Corporation. All rights reserved.
PS /> # pretty sure I’m in PowerShell... but that PS /> prompt looks different

Hold on, how can I be sure I'm in a PowerShell session in a Linux container?


PS /> dir
Directory: /
Mode   LastWriteTime   Length Name
----   -------------   ------ ----
d----- 8/2/17 2:10 PM         bin
d----- 4/12/16 8:14 PM        boot
d----- 9/17/17 7:09 PM        dev
d----- 9/17/17 7:09 PM        etc
d----- 4/12/16 8:14 PM        home
d----- 9/14/17 5:16 AM        lib
d----- 8/2/17 2:09 PM         lib64
d----- 8/2/17 2:09 PM         media
d----- 8/2/17 2:09 PM         mnt
d----- 9/14/17 5:16 AM        opt
d-r--- 9/17/17 7:09 PM        proc
d----- 9/17/17 7:09 PM        root
d----- 8/2/17 2:09 PM         run
d----- 9/13/17 3:58 AM        sbin
d----- 8/2/17 2:09 PM         srv
d-r--- 9/17/17 7:09 PM        sys
d----- 9/17/17 7:09 PM        tmp
d----- 9/14/17 5:17 AM        usr
d----- 9/14/17 5:16 AM        var
PS /> $PSVersionTable
Name                       Value
----                       -----
PSVersion                  6.0.0-beta
PSEdition                  Core
GitCommitId                v6.0.0-beta.7
OS                         Linux 4.9.41-moby #1 SMP Wed Sep 6 00:05:16 UTC 2017
Platform                   Unix
PSCompatibleVersions       {1.0, 2.0, 3.0, 4.0...}
PSRemotingProtocolVersion  2.3
SerializationVersion       1.1.0.1
WSManStackVersion          3.0
PS /> exit
PS C:\> # back to a Windows prompt

I just typed 2 commands and I now have a new operating system, in a tiny container, running on my local machine! Just docker pull and docker run; that's it!

Here's exactly what happened:

  • docker pull downloaded the image tagged ubuntu16.04 from the microsoft/powershell repository.
  • docker run created a new container from that local image, started it, and ran the command powershell. This launched a PowerShell shell in the container. You can see the "PowerShell v6.0.0-beta.7" header.
  • In the container, dir displays a decidedly Unix—not Windows—folder listing. However, the format of the directory listing is the default PowerShell layout of content, with Mode, LastWriteTime, Length, and Name fields.
  • $PSVersionTable displays version information about PowerShell. Note that the platform is Unix.
  • Typing exit closed the connection to the container, stopped the container, and returned us to a normal Windows PowerShell command prompt.

(FYI, the docker run -i -t options create an interactive terminal with the container. Learn more at Docker's documentation.)

Listing containers

So we just created a container, but where is it? We display containers with the docker ps command.


PS C:\> docker ps
CONTAINER ID    IMAGE    STATUS    NAMES
PS C:\> # no containers?

Um...what? Ah, like the Unix ps command, the docker ps command by default shows only running containers. We need to run docker ps -a to show all containers.


PS C:\> docker ps -a
CONTAINER ID IMAGE                            STATUS                          NAMES
d05a481fcbf1 microsoft/powershell:ubuntu16.04 Exited (0) 20 seconds ago     blissful_kepler
PS C:\>

(Docker command output in the walkthrough has been edited for display purposes.)

In this simple container listing:

  • You can see the image name used to create the container, and that it's not running: Status = 'Exited'.
  • Container ID has a unique value that can be used to identify and manage the container.
  • Names also has a unique value, blissful_kepler (two random strings combined together), that can be used to identify and manage the container.

Creating a container with a specific name

If you create a container, but don't specify a name, Docker creates a random name for you. And while that random value is easier to work with than the Container ID value, it's not ideal if we are going to work with the container over and over. Let's create a new container named MyContainer from the same image, and start it up.


PS C:\> # specify --name to create container with a specific name
PS C:\> docker run -i -t --name MyContainer microsoft/powershell:ubuntu16.04 powershell
PowerShell v6.0.0-beta.7
Copyright (C) Microsoft Corporation. All rights reserved.
PS /> # we are back in a container; note the PS /> prompt
PS /> # but let’s just exit for now
PS /> exit
PS C:\>

So now let's take another quick look at the container list:


PS C:\> docker ps -a
CONTAINER ID IMAGE                            STATUS                    NAMES
ffa33fdae088 microsoft/powershell:ubuntu16.04 Exited (0) 10 seconds ago MyContainer
d05a481fcbf1 microsoft/powershell:ubuntu16.04 Exited (0) 5 minutes ago blissful_kepler

Now we have two containers, both created from image microsoft/powershell:ubuntu16.04.

Stopping and removing a container

At this point, let's get rid of container blissful_kepler. First, make sure it's stopped with docker stop, and then remove it with docker rm.


PS C:\> # if you run stop and it’s already stopped, that’s OK
PS C:\> docker stop blissful_kepler
blissful_kepler
PS C:\> docker rm blissful_kepler
blissful_kepler
PS C:\> docker ps -a
CONTAINER ID IMAGE                            STATUS                   NAMES
ffa33fdae088 microsoft/powershell:ubuntu16.04 Exited (0) 2 minutes ago MyContainer
PS C:\> # only MyContainer is left

Copying content into a container

So far, this container has been helpful if you want to test small, ad-hoc commands on another operating system. But if you really want to test any preexisting code, you need to copy that code into the container and run it within the container. So let's copy over the entire PowerShell Beautifier project. On my local machine, the Beautifier is located at C:\code\GitHub\PowerShell-Beautifier. We'll copy that folder into MyContainer, under the container's /tmp path. To do this, we use the docker cp command.


PS C:\> # running this in Windows PowerShell, not from within the container
PS C:\> docker cp C:\Code\GitHub\PowerShell-Beautifier MyContainer:/tmp

Starting and reconnecting to a container

Now reconnect to MyContainer to confirm the PowerShell-Beautifier folder content is there. When we first created MyContainer, we used docker run -i -t [image name] powershell to create it from the image and start it. Now that we have MyContainer, we just need to start it with docker start, and then reconnect to it with docker exec -i -t [container name] powershell. This creates an interactive terminal with a PowerShell shell.


PS C:\> # first we need to start the container
PS C:\> # if you run start and it’s already started, that’s OK
PS C:\> docker start MyContainer
MyContainer
PS C:\> # let’s make sure it’s running
PS C:\> docker ps -a
CONTAINER ID IMAGE                            STATUS        NAMES
ffa33fdae088 microsoft/powershell:ubuntu16.04 Up 11 seconds MyContainer
PS C:\> # because it’s running we don't need to add the -a to see it
PS C:\> docker ps
CONTAINER ID IMAGE                            STATUS        NAMES
ffa33fdae088 microsoft/powershell:ubuntu16.04 Up 20 seconds MyContainer
PS C:\> # now connect
PS C:\> docker exec -i -t MyContainer powershell
PowerShell v6.0.0-beta.7
Copyright (C) Microsoft Corporation. All rights reserved.
PS /> # we are back in the container; note the PS /> prompt

Confirming folder now exists in MyContainer

To confirm the content is there, change the directory to /tmp/PowerShell-Beautifier, and get a directory listing:


PS /> # we are inside MyContainer
PS /> cd /tmp/PowerShell-Beautifier
PS /tmp/PowerShell-Beautifier> dir
Directory: /tmp/PowerShell-Beautifier
Mode   LastWriteTime     Length   Name
----   -------------     ------   ----
d----- 9/11/17 11:14 PM           docs
d----- 9/11/17 10:11 PM           src
d----- 9/11/17 10:11 PM           test
------ 6/11/17 4:05 PM   1086     LICENSE
------ 9/9/17 9:54 PM    7638     README.md
PS /tmp/PowerShell-Beautifier> # nice!

Running a test script inside MyContainer

Now let's test a PowerShell script inside the container. The PowerShell Beautifier project comes with a test script /test/Invoke-DTWBeautifyScriptTests.ps1. This script runs a series of reformatting tests to make sure the Beautifier works correctly for different scenarios. Let's run this script in MyContainer.


PS /tmp/PowerShell-Beautifier> # we are inside MyContainer
PS /tmp/PowerShell-Beautifier> cd test
PS /tmp/PowerShell-Beautifier/test> ./Invoke-DTWBeautifyScriptTests.ps1
Importing beautifier module: DTW.PS.Beautifier
Processing folder: Case
File: Commands.ps1
File: Members.ps1
File: ParameterAttributes.ps1
File: Parameters.ps1
File: Types.ps1
Processing folder: CompleteFiles
File: Clear-PSFSitecoreRecycleBin.ps1
File: Remove-PSFOldContent.ps1
Processing folder: FileEncoding
File: ASCII_NoBOM.ps1
File: UTF16_BE_BOM.ps1
File: UTF16_BE_NoBOM.ps1
File: UTF16_LE_BOM.ps1
File: UTF16_LE_NoBOM.ps1
File: UTF8_BOM.ps1
File: UTF8_NoBOM.ps1
Processing folder: Rename
File: Alias.ps1
Processing folder: Whitespace
File: CmdletDefinition.ps1
File: DotSource.ps1
File: NoNewLineAtEndOfFile.ps1
File: NonWindowsLineEnding.ps1
File: WithinLine.ps1
File: Indentation.ps1
File: Indentation.ps1
File: Indentation.ps1
All tests passed - woo-hoo!
PS /tmp/PowerShell-Beautifier/test>

Invoke-DTWBeautifyScriptTests.ps1 also has a -Quiet option. If the test script runs without any errors, then only Boolean $true is returned at the end. This is helpful for automating the test script described in the next section.


PS /tmp/PowerShell-Beautifier/test> # we are inside MyContainer
PS /tmp/PowerShell-Beautifier/test> # run the script sans output
PS /tmp/PowerShell-Beautifier/test> ./Invoke-DTWBeautifyScriptTests.ps1 -Quiet
True
PS /tmp/PowerShell-Beautifier/test> exit

Automating the testing process

Running all these Docker commands manually is a good way to learn the Docker command line interface. It's also a good way to get inside a PowerShell Core instance for another operating system, to try some small commands. But if you want to use these Docker images to automate testing of your existing PowerShell scripts, you would need to automate all the steps above. And you would want to be able to:

  • Specify one or more source paths to copy into a container.
  • Specify which test script to run in the container.
  • Specify which Docker images to test against, and validate they exist.
  • Create missing containers for the images we specify, start containers when we need to, and stop them when we're done.
  • Maybe even allow the override of the default Docker Hub project with something other than microsoft/powershell.

There's actually a script ready for you right now that does all this! Its name is Invoke-RunTestScriptInDockerCoreContainers.ps1, and it's in the PowerShell Beautifier project under /test/Automation. Check out the readme, or go straight to the source yourself. Feel free to copy the script, or clone and fork the entire project if you'd like to try out the PowerShell Beautifier yourself.
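Stripped to its essentials, that automation is a loop over image names that wraps the same docker commands we just ran by hand. Here is a rough sketch of the idea; it is not the actual Invoke-RunTestScriptInDockerCoreContainers.ps1 code, and the container names and paths are assumptions:

# Rough sketch: run a test script in a PowerShell Core container for each image tag.
$imageTags  = 'ubuntu16.04', 'centos7'
$sourcePath = 'C:\Code\GitHub\PowerShell-Beautifier'
$testScript = '/tmp/PowerShell-Beautifier/test/Invoke-DTWBeautifyScriptTests.ps1 -Quiet'

foreach ($tag in $imageTags) {
  $container = "test-$($tag -replace '[^a-zA-Z0-9]','')"
  # Create the container on first use; otherwise reuse the existing one.
  if (-not (docker ps -a --format '{{.Names}}' | Where-Object { $_ -eq $container })) {
    docker create --name $container -i -t "microsoft/powershell:$tag" powershell | Out-Null
  }
  docker start $container | Out-Null
  docker cp $sourcePath "${container}:/tmp"
  # Run the test script inside the container and capture its output.
  $result = docker exec $container powershell -Command $testScript
  Write-Output "$tag : $result"
  docker stop $container | Out-Null
}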

First, let's pull another image from Docker Hub. Let's get the centos7 image.


PS C:\> docker pull microsoft/powershell:centos7
centos7: Pulling from microsoft/powershell
2d490773b5db: Pull complete
8cd0d9ccbcbc: Pull complete
f54c59906966: Pull complete
963d4e7132af: Pull complete
98f59503f09a: Pull complete
Digest: sha256:a6fdb2a29195f3280cb4df171f8fbabc20d0e6fbee8bb4b2fd9de66e1f16c33e
Status: Downloaded newer image for microsoft/powershell:centos7
PS C:\>

Now let's run Invoke-RunTestScriptInDockerCoreContainers.ps1. By default, if you copy the entire Beautifier project, you may not need to specify any values for the parameters. The default values for the script parameters should cover everything. The only thing you might want to specify is -TestImageNames, which currently defaults to ubuntu16.04, centos7, and opensuse42.1. But just to show you the parameters in action, we'll specify -SourcePaths, -TestFileAndParams and -TestImageNames.

Just to be clear: currently on my machine I have images for ubuntu16.04 and centos7, but not opensuse42.1.


PS C:\> # running this in Windows PowerShell, not from within the container
PS C:\> C:\code\GitHub\PowerShell-Beautifier\test\Automation\Invoke-RunTestScriptInDockerCoreContainers.ps1 `
>> -SourcePaths C:\Code\GitHub\PowerShell-Beautifier `
>> -TestFileAndParams 'PowerShell-Beautifier/test/Invoke-DTWBeautifyScriptTests.ps1 -Quiet' `
>> -TestImageNames @('ubuntu16.04','centos7','opensuse42.1')
Testing with these values:
Test file: PowerShell-Beautifier/test/Invoke-DTWBeautifyScriptTests.ps1 -Quiet
Docker hub repo: microsoft/powershell
Images names: ubuntu16.04 centos7 opensuse42.1
Source paths: C:\Code\GitHub\PowerShell-Beautifier
Image opensuse42.1 is not installed locally but exists in repository microsoft/powershell
To download and install type:
docker pull microsoft/powershell:opensuse42.1
Testing on these containers: ubuntu16.04 centos7
ubuntu16.04
Preexisting container found
Starting container
Getting temp folder path in container
Copying source content to container temp folder /tmp/
C:\Code\GitHub\PowerShell-Beautifier
Running test script on container
Test script completed successfully
Stopping container
centos7
Preexisting container not found; creating and starting
Getting temp folder path in container
Copying source content to container temp folder /tmp/
C:\Code\GitHub\PowerShell-Beautifier
Running test script on container
Test script completed successfully
Stopping container
PS C:\> # the test script ran correctly in all containers! woo-hoo!

Conclusion

As you can see, Docker is a simple and powerful technology for packaging and distributing software. While this article focused on just using the PowerShell Core images, you should check out the Docker Hub to see what other images exist. You can also learn more about how to incorporate Docker into your development processes.

Also, feel free to submit a ticket if you have an issue or enhancement request for the test script or the PowerShell Beautifier. And feel free to contact me.

Thank you for your time and interest!

Dan Ward, guest blogger

 

Reverse Desired State Configuration: How it works


Nik Charlebois is a Premier Field Engineer based out of Canada. He is the author of several books on SharePoint automation, and he writes blog posts on a regular basis about all things PowerShell. You can find out more about his work here.

ReverseDSC

PowerShell Desired State Configuration (DSC) has been around for a few years now, and many organizations are using it to help them automate parts of their DevOps pipeline. There are now several hundred DSC modules available in the PowerShell Gallery. This enables organizations to create a complex automation solution that allows them to manage, via DSC, different components of their technology stacks.

One of the most complex solutions out there to manage via DSC is certainly SharePoint. While the SharePointDSC module is very mature, having been around for almost 3 years, it still represents a fair investment for clients to rewrite their imperative deployment scripts into DSC. Some of the most complex SharePoint environments I have seen deployed with PowerShell DSC exceed 150,000 lines of code. So how can an organization that has an existing SharePoint footprint get started with DSC, without spending months re-writing their configuration? It can simply use ReverseDSC.

What on earth is ReverseDSC?

Most organizations already have an existing investment in technology solutions, and rewriting their imperative set of scripts into a declarative DSC configuration script is not a project everyone is willing to undertake. ReverseDSC is a module that provides a set of functions that can be used to reverse engineer an existing environment into a DSC script. That’s right, you can simply run a script against your existing complex environment, such as SharePoint, and automatically generate these 150,000 lines of DSC code that represent your environment. You don't need to do this manually.

What software is supported by ReverseDSC?

ReverseDSC, at its core, is an extensible solution that you can use to extract your existing environment as a Desired State Configuration (DSC) script. As long as there is a DSC Module for the technology component, you can use ReverseDSC to extract its current configuration. While ReverseDSC core is technology agnostic, and it provides interfaces to extract current configurations into DSC scripts, it does require a technology-specific component to be able to properly extract information from the existing environment.

These technology-specific scripts are referred to as Orchestrator Scripts, and they include logic that is specific to the piece of technology being extracted. Several official ReverseDSC Orchestrator Scripts are currently available on GitHub.

Each Orchestrator Script release is closely aligned with the release of its associated DSC Module. For example, the SQLServer.Reverse script will have major releases that are aligned with major version releases of the xSQLServer DSC module.

How does it work?

Before I even attempt to answer that question, let us start by defining the following two terms:

  • Desired State: How the host should be configured
  • Current State: How the host is currently configured

Let’s look at an example to illustrate this concept. If you were to ask my wife what my desired state should be, it would be something like:

[…]

Height NiksHeight
{
    ValueInFeet = 6.3
    Ensure      = 'Present'
}

Weight NiksWeight
{
    WeightInPounds = 175
    Ensure         = 'Present'
}

[…]

However, as much as it saddens me to admit it, the result of Test-DSCConfiguration -Detailed on this “Desired State” would be False for both resources. I am just short of 5'9", and just shy of 180 pounds. That is my Current State. The goal of DSC is to make sure the Current State of an environment matches its Desired State. Unfortunately for me, DSC is not going to help me with the example above.

For the Local Configuration Manager (LCM) to check if the environment is configured as defined or not, we need a way to obtain information about both the Current State and the Desired State of each resource defined within the DSC configuration. By default, the LCM knows about the Desired State. It has that information on disk in the Current.mof file. What we are missing is information about the Current State.

Every DSC resource needs to define three core functions: Get, Set and Test. The Get method is the one responsible for obtaining information about the Current State of a given resource. The Set method is responsible for bringing the environment into its Desired State. It is where the configuration happens. And the Test method obtains information about the Current State by making a call into the Get function and compares it against the Desired State.
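For a script-based (MOF) resource, those three functions are literally the commands the resource module exports: Get-TargetResource, Test-TargetResource, and Set-TargetResource. A minimal skeleton for a hypothetical resource might look like the following (the Name and Ensure properties are illustrative only):

# Skeleton of a script-based DSC resource.
function Get-TargetResource {
    param ([Parameter(Mandatory)] [string] $Name)
    # Read the Current State from the machine and return it as a hashtable.
    @{ Name = $Name; Ensure = 'Present' }
}

function Test-TargetResource {
    param ([Parameter(Mandatory)] [string] $Name,
           [ValidateSet('Present','Absent')] [string] $Ensure = 'Present')
    # Compare Current State with Desired State; $true means "already compliant".
    (Get-TargetResource -Name $Name).Ensure -eq $Ensure
}

function Set-TargetResource {
    param ([Parameter(Mandatory)] [string] $Name,
           [ValidateSet('Present','Absent')] [string] $Ensure = 'Present')
    # Bring the machine into the Desired State (a no-op in this skeleton).
}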

In the case where we have configured the LCM in ApplyAndAutocorrect mode, upon detecting that the Current State and Desired State do not match, it will make a call back into the Set function and attempt to bring the environment back in its Desired State. The following diagram shows the relations among these functions of a DSC resource:

Diagram of methods

ReverseDSC, along with the technology-specific Orchestrator Scripts, simply makes use of the Get functions of the resources to retrieve information about the environment’s Current State. Then it generates a DSC configuration script that represents that Current State.

For example, the SQLServerDSC.Reverse Orchestrator Script, responsible for reverse engineering an existing SQL Server environment into a DSC configuration script, calls into the xSQLServerMaxDop resource’s Get function. The Orchestrator Script extracts information about the current server’s maximum degree of parallelism settings, converting that information into a DSC configuration output. A fully implemented Orchestrator can call into the Get function of every resource offered by its associated DSC module.
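Conceptually, each Orchestrator step boils down to "call the resource's Get, then serialize what comes back." Here is a hand-rolled sketch of that idea using Invoke-DscResource; the property names are assumptions for illustration, and this is not the actual SQLServerDSC.Reverse code:

# Ask a DSC resource for its Current State, then emit it as a configuration fragment.
$current = Invoke-DscResource -Name xSQLServerMaxDop -ModuleName xSQLServer -Method Get `
    -Property @{ SQLServer = 'SQL01'; SQLInstanceName = 'MSSQLSERVER' }

@"
xSQLServerMaxDop MaxDop
{
    SQLServer       = '$($current.SQLServer)'
    SQLInstanceName = '$($current.SQLInstanceName)'
    MaxDop          = $($current.MaxDop)
}
"@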

The ReverseDSC process and related Orchestrator Scripts are only as good as their associated DSC module. If the module doesn’t support a certain feature, the ReverseDSC process won’t be able to extract it.

Real-life usage

There are dozens of potential reasons an organization would want to invest into ReverseDSC. Here is a short list of some of the most popular ones we have seen in the industry.

Replication

If you have an existing investment in a technology, and you want to replicate that exact environment elsewhere (in the cloud or on-premises), you can use ReverseDSC. With ReverseDSC, system administrators can now extract a DSC configuration from an on-premises environment, and replicate it in Azure within minutes, all while automating their processes. Disaster recovery is another reason to replicate environments, to ensure we have failover environments that match the original one.

Standardization

Isn’t it every organization’s dream to have their developers write and test code on an environment that is an exact replica of the production environment? This helps ensure that there are no unwanted behaviors once the code hits general release. ReverseDSC makes this very simple to do. Simply extract the DSC configuration of your production environment, and use the output DSC script to generate dozens of developer environments matching its exact configuration. Your organization could even run a regular ReverseDSC exercise against the production environment (for example, every week or month). You can put that latest extract onto a DSC pull server, for all developers’ workstations to automatically get updated with the delta of whatever changed in production since the last extract.

DSC on-boarding

Another usage scenario is where an organization simply wants to on-board their existing systems onto DSC. Perhaps the organization wants to benefit from the DSC's monitoring capabilities (ApplyAndMonitor) and self-attempt at preserving its configuration (ApplyAndAutocorrect). An organization can now run ReverseDSC against an existing environment, extract the DSC configuration script that represent it, and push it right back at its LCM for it to start monitoring any changes done to it. This is basically saying to the environment: “Tell me how you are currently configured, and by the way your current configuration is now also becoming your Desired State from now on.”

Documentation and auditing

Organizations that have already invested time and money in writing deployment scripts to build their environments most likely are not thrilled about having to rewrite the whole thing as a DSC configuration script to leverage DSC's monitoring feature. Instead of having to rewrite all their scripts, they can now simply build a base environment that uses these "imperative scripts," and then run ReverseDSC on the resulting environment. While this does not directly convert deployment scripts into DSC, it provides the organization with the same results in the end.

Others

Last but not least, documenting and comparing changes in time for an existing environment is another very popular scenario. Organizations can run ReverseDSC against their environment at frequent intervals, generate the output, and compare it with the output from the last extract to see if anything has changed. Using side-by-side comparison tools allows organizations to easily audit what changes have happened to an environment over a given period of time.

Several other scenarios, such as using the output DSC configuration script to analyze an environment against best practices, have also been considered. Currently there are no formal plans to embed such logic into the Orchestrator Scripts.

Summary

We are thrilled to see all the excitement from the community around the concept of ReverseDSC, and we encourage you to get involved in the projects by contributing on GitHub. Please take the time to try out the various Orchestrator Scripts that are already being offered, and take the time to log issues and feature requests.

Next time, we will dive into the process of using ReverseDSC to help you migrate a SharePoint on-premises farm into Azure infrastructure-as-a-service.

Nik Charlebois

Premier Field Engineer, SharePoint

 

 

PowerShell support for certificate credentials


Summary: It's not a very well-known feature, but the PSCredential object, and the PowerShell Get-Credential cmdlet, both support certificate credentials (including PIN-protected certificates). In this post, we take a look at how a certificate credential is marshaled inside a PSCredential object, how you can do this marshaling yourself, and how you can retrieve the original certificate from a PSCredential object supplied to you.

Most Win32 APIs that support the PSCredential object for credential validation already support certificates. However, if your code currently consumes a PSCredential, and you use the user name and password without expecting a certificate credential, you can make the necessary adjustments yourself.

All the code for this walkthrough can be found here.

Get a certificate inside a PSCredential object

The PSCredential object has only two properties, 'UserName' and 'Password'. To wedge a certificate into this format, you must use the CredMarshalCredential API. This API takes a credential type, and a credential struct, and it produces a string representing the credential. As of the time of this writing, the credential types that are supported are CertCredential and UsernameTargetCredential.

This means we can generate a string from a certificate credential, and then set the 'UserName' field of the PSCredential object to this string. If the certificate is PIN protected, the PIN can be wrapped in a SecureString, and set as the Password property on the PSCredential.
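In other words, once you have the marshaled string, building the credential is ordinary PSCredential construction. A minimal sketch, assuming $marshaledCert already holds the string produced by CredMarshalCredential (it is not computed here):

# $marshaledCert is assumed to contain the CredMarshalCredential output (the "@@B..." style string).
$pin        = Read-Host -Prompt 'Smart card PIN' -AsSecureString
$credential = New-Object System.Management.Automation.PSCredential($marshaledCert, $pin)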

This process is exactly what the Get-Credential cmdlet does in PowerShell (on Windows). If you run Get-Credential, you will get the standard credential dialog box.

Screenshot of credential request

Select the down arrow on the right side. From the drop-down list, you can select certificates that match the User Certificate criteria. (Generally, the dialog box shows certificates in the Personal and Trusted Root stores of the current user.)

Screenshot of list of certificates

Selecting my smart card results in the following:

Screenshot of smart card credentials

 

When I'm all done, the resulting output looks like this:

UserName                     Password
--------                     --------
@@BkvQpJ93JShl3nkv7tyCbLL#wO System.Security.SecureString

As you can see, the 'UserName' field doesn't really look like a user name. It is, in fact, the encoded certificate data returned from CredMarshalCredential.

This marshaled credential can now be used with many standard Windows authentication APIs, including LogonUser. However, if you're building a .NET application, and you want to support PSCredentials that use a certificate, you will have to unpack this special UserName property. (This is true unless you only ever intend to pass along the marshaled credential to supporting APIs.) Let's see how we can do that.

Get a certificate from a PSCredential.UserName blob

If you want to locate the certificate represented by a PSCredential.UserName data blob, you can use the CredUnmarshalCredential API, which is the logical inverse of our trusty CredMarshalCredential. You can pass your UserName string, and receive the CERT_CREDENTIAL_INFO struct back, which has the SHA-1 hash of the original certificate. Your application can then do any certificate lookup you want (assuming your application or service has the correct permissions).
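The lookup step itself is straightforward once the hash is in hand. A sketch, assuming $thumbprint already holds the hex-encoded SHA-1 hash recovered from CredUnmarshalCredential:

# Find the original certificate in the current user's Personal store by thumbprint.
$cert = Get-ChildItem -Path Cert:\CurrentUser\My |
    Where-Object { $_.Thumbprint -eq $thumbprint }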

For full details on how to perform this call, see the ReverseMarshal function in the example code here.

Matt Bongiovi

Software engineer, Active Directory Fabric team

 

Script Wars: The Farce Awakens (part I)


Summary: Seven years ago, “Hey, Scripting Guy!” was approached by a member of the community to see if we would be interested in doing something a little fun for the holidays.  The result was a seven-part series called “A PowerShell Carol.”

So when we were approached continuously in following years, we allowed this madness to continue! The tradition continued on with the "Blues" in Blueville and Curly Blue, amongst others, doing a twist on popular holiday stories. The community even dared to touch upon classic tales, with "Oliver Script," a "twist" on another famous story by Mr. Dickens.

And now, we bring you the 8th annual “Hey, Scripting Guy!” holiday special.

Script Wars: The Farce Awakens

It is a period of confusion for the software industry. Software companies from eons in the past had computer systems that contained rudimentary operating systems and very few commercial, off-the-shelf solutions. Software and Infrastructure as a Service were unheard of in these days.

Over time the landscape changed and improved. New factions, such as the IT professional, split apart from the old, as systems became more user friendly and business needs changed. Automation formed upon the horizon.

But the early automation technologies were challenging, and required strong training to use them.   Amongst this came best practices and top-down software design.

The old republic of the IT professionals has begun to enter retirement, and the use of older solutions such as VBScript has begun to fade.

A new breed of young upstarts has appeared, taking advantage of newer and simpler solutions such as Windows PowerShell. This story is of one such young upstart, a brilliant and gifted individual called Rey Skyworker.  

Her story begins as we meet her discussing the concepts and history of automation with her instructor, one Ben Kerberosie.

--------------

Ben and Rey were sitting together, discussing the use of Windows PowerShell to clean up Active Directory.

“Rey,” the instructor asked as he looked over his bespectacled and soldered glasses, “If I were to try to clean up an Organizational Unit in Active Directory, what types of Cmdlets would I use?”

The student thought about it for a moment. “I would probably use a Get-ADUser Cmdlet, and then pipe that to a Remove-ADObject.”

Ben pulled over some code onto his screen. “A sample like this would work then?”

$TargetOU='OU=Offices,DC=Contoso,DC=local'
Get-ADUser -Filter * -SearchBase $TargetOU | Remove-ADObject -Confirm:$False

Rey was about to nod her head when she noticed something. "Technically the script works, but I'm not sure I would put this into production."

The old Master looked over her. "Why is that? Don't be afraid to speak your mind."

Rey looked over at the screen. "First, I would have never used the '*', which is a wildcard character. If for any reason the $TargetOU was targeted at the root, that would have had a major impact on production. Personally, I would have written that parameter as a separate object."

She showed some sample script to Ben.

$TargetOU='OU=Offices,DC=Contoso,DC=local'
$ADFilter='Name -like "*smith"'
Get-ADUser -Filter $ADFilter -SearchBase $TargetOU | Remove-ADObject -Confirm:$False

"This…," she noted, "would at least limit the impact to the only users with 'Smith' in the name. I could write a more advanced filter, but using '*' would not be a good career option."

"The second thing I noticed is a lack of the '-whatif' parameter. Whenever I write a PowerShell script, I like to have that option on anything that does something destructive. That is, if the Cmdlet supports it."

"Good thinking, but why not just use a '$True' on the confirm switch?" Ben noted.

"Confirm:$True would pause for each execution of the Cmdlet, but the 'whatif' parameters shows me quickly a count of how many would try to execute. It's a good way to see immediately if I am removing 1 or 1,000,000 accounts. I would immediately know that I didn't trap for something in my code."

She expanded upon her script with this example.

$TargetOU='OU=Offices,DC=Contoso,DC=local'
$ADFilter='Name -like "*smith"'
Get-ADUser -Filter $ADFilter -SearchBase $TargetOU | Remove-ADObject -Confirm:$False -WhatIf

"Ahhhh…" the old one smiled. "I see you are strong in the ways of The Farce."

Rey scratched her head, puzzled. "The wha… is that some kind of joke?"

"Nope. The Farce is what I use to describe the differences between a good scripter and a bad one.   Really, "bad" one is not the correct word, it would be 'One who is not thinking before they execute'."

Rey was interested. "Tell me more…"

"Well…," Ben began, "In the old days of vbScript, we had developers writing scripts and automation because it wasn't something the average IT professional was up to. It was a pretty intense little language."

Rey remembered seeing the old vbScripts in some virtual machines from training. They were quite complex, with many checks and balances.

"In the modern world we have Windows PowerShell, which is incredibly easy to use. But it's also easy to make mistakes. "

"The other challenge is the need of the business to get things into production fast, while making sure checks and balances are in place with a limited budget and new workers. This is the Farce. But if you are strong in its ways, you can write scripts quickly, but also know how to put in simple checks and balances to keep your code strong in the light side of the Farce."

Ben looked over at Rey. "You must continue your training in the ways of the Farce if you are to come with me to…"

"ALDERAAN?!?!?!" she burst out excitedly.

"Well…," Ben looked over, "actually Contoso Holly Jolly Hat Company. I have need for some new help desk staff, and I would love for you to start with us."

Rey was excited. Her first real job in IT, and just in time for the holidays. All those days of hard work were paying off. She was going to get paid doing what she loved to do.

Stay tuned tomorrow, when we continue the story of "Script Wars: the Farce Awakens."

 

Sean Kearney, Premiere Field Engineer

Enterprise Services Delivery, Secure Infrastructure

 

Script Wars: The Farce Awakens (part II)


Summary: Yesterday, we met a newly graduated IT professional, Rey Skyworker, as she discussed the ways of "The Farce." It was during this discussion that her instructor, Ben Kerberosie, discovered she had a natural gift in understanding how to implement some good practices into writing her scripts.

Today, we sit quietly (I mean all of you in the back too, no chattering and tossing about popcorn!), as Rey is about to embark on her new job.

The position: on the help desk. The company: Contoso Holly Jolly Hat Company.

She is being introduced to her new co-worker on the help desk, Jeremy Tinnison.

-----------------

Jeremy shook her hand.  "Welcome to Contoso, Rey, I'm Jeremy but everybody around here just calls me Tin."

She looked up. "Ok Tin. Interesting environment you have here. So what do you do here mostly?"

Ben looked over. "Go ahead Tin, let her know of the challenges you're having. She's well versed in Windows PowerShell and should be able to help you past some of your automation challenges."

Tin was about to speak when a small vacuum with googly eyes bumped into Rey's feet.

"What th…," she looked down as it rapidly began to try vacuuming her shoelaces. The battle to regain control of her feet was more than amusing. The small, football shaped object rolled about as if in disgust.

"That," muttered Ben, "is one of the experiments from R&D. It is the 'Trash Bagger 7.5' or 'TB-7' for short. It roams the office trying to find garbage, gum wrappers or bits of LAN cables. It occasionally makes mistakes like you just saw. Sometimes we find it picks up the odd power cord and unplugs systems. They are working on Release 8, which should resolve these unexpected issues."

Rey looked down, smiling at the rolling, blinking nightmare. "Silly thing."

"So!" burst out Tin, "Let me tell you about one of our current challenges. We have a Windows PowerShell script that creates a user in our Azure Active Directory environment. Most of the process is manual still for Office 365 for licensing. Our short-term issue is that we need to find a way to trap for errors in the script."

He showed Rey the initial script they used to provision users in Azure Active Directory.

$First=Read-Host 'Enter First Name:'
$Last=Read-Host 'Enter Last Name:'

Connect-AzureAd

$DisplayName=$First+' '+$Last
$Mailnickname=$First+$Last
$UserPrincipalName=$First+$Last+'@contoso.com'

$TempPassword='BadPassword4U!'
$PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$PasswordProfile.Password=$TempPassword

New-AzureADUser -DisplayName $DisplayName -GivenName $First `
  -Surname $Last -AccountEnabled:$true -PasswordProfile $PasswordProfile `
  -MailNickName $MailNickname -UserPrincipalName $UserPrincipalName

Ben stepped up. "Occasionally some of the new staff don't follow the instructions, and the script throws an error. We'd like to find a way to trap for the various errors in PowerShell. It may cause the script to stop, but we'd like to find a way to clean up or return codes back to calling scripts down the road."

Tin ran the script to demonstrate, by deliberately giving blank values for the first and last name.

Screenshot of PowerShell

"Rather than this appear," continued Tin, "we'd like to be able to trap for the individual types of errors in PowerShell whenever possible. Logging them should happen but we want to be able to action the script in certain scenarios."

"You can if you like," ventured Rey. "Access the $Error variable in PowerShell. It's not just a text array, it's actually an object which contains all the information on errors."

Rey stored away the value of the last error on the screen to view its properties.

$ErrorToSee=$Error[0]

"If we pipe this into Get-Member, we can see it has many properties, including one called 'Exception.'"

Screenshot of PowerShell

"The 'Exception' property contains the actual object with the exception value PowerShell caught. We can view it like this."

Screenshot of PowerShell

Tin's eyes lit up. "Oh! If I run that against Get-Member, will it expose more information?"

Tin piped the output to Get-Member to view the additional properties.

Screenshot of PowerShell

Rey looked at the output. "In this case, it's not a property we need but the method type that failed. We can use the GetType() method to pull this information out."

Screenshot of PowerShell

"From the screen, we can see the error type is 'ApiException' but we need the full name for it.  Fortunately, we can look for other members with 'Name' in their description."

Screenshot of PowerShell

Tin looked down at the results. "I'm guessing 'FullName' is just too obvious?"

Rey nodded. "Just add it on to the gettype() and you'll have your answer," as she typed quickly into the console.

Screenshot of PowerShell
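For anyone following along without the screenshots, the command sequence Rey typed looks roughly like this (a sketch, assuming the failed New-AzureADUser call is still the most recent entry in $Error):

$ErrorToSee = $Error[0]

# Inspect the error record, then the exception object it wraps.
$ErrorToSee | Get-Member
$ErrorToSee.Exception

# Identify the exception's type, then grab its full name for use in a Catch block.
$ErrorToSee.Exception.GetType()
$ErrorToSee.Exception.GetType() | Get-Member -Name *Name*
$ErrorToSee.Exception.GetType().FullName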

"Now to use this, we just use a 'Try Catch' statement in PowerShell. This block of script's job is to literally 'Try it out' and put in errors to 'Catch for'. This allows us to write code to mitigate, report, or trap the errors."

Rey added a simple Try Catch statement around the existing block of code in the PowerShell script, using the identified error.

$First=Read-Host 'Enter First Name:'
$Last=Read-Host 'Enter Last Name:'

Connect-AzureAd

$DisplayName=$First+' '+$Last
$Mailnickname=$First+$Last
$UserPrincipalName=$First+$Last+'@contoso.com'

$TempPassword='BadPassword4U!'
$PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$PasswordProfile.Password=$TempPassword

Try
    {
    New-AzureADUser -DisplayName $DisplayName -GivenName $First `
      -Surname $Last -AccountEnabled:$true -PasswordProfile $PasswordProfile `
      -MailNickName $MailNickname -UserPrincipalName $UserPrincipalName
    }
Catch [Microsoft.Open.AzureAD16.Client.ApiException]
    {
    Write-Output 'A blank name was supplied. Restart the script.'
    }

 

When they re-ran the script with the names supplied as blank, the results were far nicer.

Screenshot of PowerShell

Tin looked up. "Oh! I can trap for additional errors as well?"

Rey noted, "Just add an additional Catch statement for each unique error condition. I used Write-Output as an example, but you can put in any PowerShell code to deal with the errors."

Tin was excited! "What else can we do with this script? I want to make all of this seamless for the staff!"

Stay tuned for tomorrow's episode of "Script Wars," as Rey touches on more ways to make the script stronger with "the Farce."

Sean Kearney, Premier Field Engineer

Enterprise Services Delivery, Secure Infrastructure

 

 


Script Wars: The Farce Awakens (part III)


Summary: When last we saw our hero, Rey Skyworker, she was aiding her new co-worker, Tin. They managed to implement better error trapping by using the "Try Catch" statement in Windows PowerShell. The fact that they could do this so easily has tickled Tin's curiosity.

----------------------------

"Now Rey, I'm looking at this and as I see it, I could avoid the error altogether by trapping for those situations. Maybe adding in a few IF statements like this to the script."

If (($First -eq '') -or ($Last -eq ''))
{
    Write-Output 'Names cannot be blank'
}
Else
{
    # ...rest of the script
}

Rey nodded. "You mean adding in some data validation?  We can definitely do that, but a much better way would be to remove the READ-HOST statements, and use parameters. We can add validation directly to the parameters."

She changed the following lines from this:

$First=Read-Host 'Enter First Name'
$Last=Read-Host 'Enter Last Name'

To this:

Param(
    [String]$First,
    [String]$Last
)

Tin scratched his head. "How would admins run the script? Does it prompt for the information?"

Rey shook her head. "First you would run the script in this fashion."

.\NewContosoAzureUser.ps1 -First 'John' -Last 'Smith'

"In this setup, you now do create users in bulk, by grabbing data from another source and sending it to the script from the command line."

Ben looked down. "That's a great improvement. But in the short term, we have a lot of techs trained on the manual approach. How can this be adapted to meet those needs?"

"By using validation on the parameters." Rey looked up. "In this setup, we can define that the values are permitted to be null or not, assign minimum/maximum lengths, and even specify the object type. In the example above, these objects will default as a [String] type."

Rey keyed into the PowerShell console the following line, to show how to leverage parameters in an Advanced function.

Get-Help About_Functions_Advanced_Parameters

Drilling through the content, she found a good example of what she was looking for. "Ah! Here we go. The ability to ensure the parameter cannot be empty or null."

Screenshot of PowerShell

Tin looked over. "Hang on, that seems too easy. If I just do this change to the code."

Rey nodded. "It actually is. So first, if we just want to ensure that $First is never a blank and is a required value, we would add in this."

She altered the beginning block of code from this:

Param(
    [String]$First,
    [String]$Last
)

 

To this:

Param(
    [Parameter(Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [String]$First,
    [String]$Last
)

They then re-ran the script without providing any values, to see the result, and pressed Enter when it prompted for "First:"

Screenshot of PowerShell

Tin was jumping up and down excitedly. "This is too awesome! So if anybody makes a mistake, I can actually TRAP for it BEFORE it tries to use the bad data? Hey, wait, I think I can do the second parameter!"

Tin added the additional lines for $Last to force it to be required information.

Param(
    [Parameter(Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [String]$First,
    [Parameter(Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [String]$Last
)

"Now hang on… I just realized something. We do have the occasional account that doesn't need a last name. Ones like 'Marketing' or 'Finance.' How can we ensure there is a value, and allow that one to be not assigned by the user?"

To identify the solution, Rey scanned through the help from About_Functions_Advanced_Parameters. "Ah! Here it is. Remove the 'NotNullOrEmpty' and replace it with 'AllowEmptyString'. In this scenario, it will still prompt, but allow the value to be a blank."

Param(
    [Parameter(Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [String]$First,
    [Parameter(Mandatory=$true)]
    [AllowEmptyString()]
    [String]$Last
)

They re-ran the script to see the results, supplying a value for $First when prompted, and nothing for $Last.

Screenshot of PowerShell

Ben looked over. "Looks like the pair of you will make a great team in this battle to smooth out our processes!"

Just as Ben finished talking, TB-7 decided to start chomping on Rey's shoelaces.

Return tomorrow for another exciting adventure in "Script Wars." There will be more exciting PowerShell coolness, and we hope Rey will get her shoelaces back.

Sean Kearney, Premier Field Engineer

Enterprise Services Delivery, Secure Infrastructure

 

Script Wars: The Farce Awakens (part IV)


Summary: A quick recap shall we? Newly graduated from IT training and future guru in the world of Windows PowerShell, Rey Skyworker has been hired on by her mentor, Ben Kerberosie. Already, she has helped peers from her team improve their PowerShell code to do some basic error trapping and cleanup of the data upon entry.

Not bad for somebody on their first day, eh?

Oh, almost forgot, her shoelaces have been chosen as lunch by a new Beta cleaning robot with more suction than common sense.

Shall we continue? Let's!

--------------------------

Tin has been left back to tend to the help desk, while Ben continues down the hall to introduce Rey to the shipping department.

"They nickname themselves 'Falcon.' They have a very efficient shipping system, supplemented with some Windows PowerShell scripts. I'd like you to see if there is anything that can be done to improve it."

Rey nodded, "But I need to ask, why do they nickname themselves 'Falcon?'"

Ben smiled. "I'll let them explain that one to you."

Coming around the corner, they ran into two fellows. One walked in with a swagger, the other with such a heavy beard he looked like the character from the old TV show, Grizzly Adams.

"Rey, I'd like you to meet Don Yolo and Lou Hackah. Our team from shipping with a flair for some PowerShell!"

"Hey!" Don offered up his hand to shake, while all you heard from Lou was a loud growl.

"Don't worry about Lou, he tends to grunt and growl an awful lot lately. He threw his back out the other week trying to lift a pallet of Tilley hats on a bet. 'Just how heavy could they be?' he asked me. 'They're just hats!' he kept saying."

Lou just looked over at Don, suggesting he not do the "I told you so" look again. The motion caused him to roar out with pain.

"Strange too, Don seems to understand everything Lou says when he yowls in pain or grunts. They have a very special relationship you see."

Rey smiled. "Don, I have a silly question. Why do you nickname yourselves the 'Falcon' group?"

"Oh that!" Don smiled. "We're the fastest shipping division in the area. We usually ship 12 parcels in the time is takes to do a Kettle Run!"

"Kettle Run?" Rey looked over quizzically.

"Yes, you know. The time it takes to boil water in a kettle? For tea?"

Ben looked over. "I told you it was worth asking," he winked.

"So Don, they tell me you've some really cool PowerShell scripts running here!"

The scruffy looking fellow reached past a fenced-in collection of nerf balls on his desk to open up the console. "I don't know about cool, but we have a simple script we use constantly. It keeps us and the customers aware of the shipping status."

Don opened up the script in his PowerShell ISE.

$r=Invoke-RestMethod -Uri 'https://shipping.contoso.com/api.svc' -Method Get -OutFile m.pdf
If ($r.result -eq 'SUCCESS') { Send-MailMessage -Attachments m.pdf -Bcc ship@contoso.local -From shippingsystem@contoso.local -Body "Confirmation of Shipment for Customer $($r.Name) - Shipped $($r.date)" -To "$($r.email)" -Subject 'Shipping confirmation' -SMTPServer 'smtp.contoso.com' } else { Write-Host "Tr Err $($r.errorcode)" }

Rey's eyes popped open. "What does it do?"

"Well, it is a pretty simple script. It connects to our internal REST API, written by our development team for the shipping system. It pulls down the status of all shipments and when they are complete, it emails us here at Falcon and the customer with the status."

Rey and Ben both looked at the code. They understood what was being said, but the code was hard to understand. Ben looked over at Don. "Do we have some documentation on this? It sounds like a useful business tool."

Don shrugged his shoulders. "Really haven't had the time." Lou roared in pain as he laughed.

"We could," Rey suggested, "make some minor changes to the code. By spacing it out, adding more variables and comments…. It could be self-documenting."

Ben looked over. "That could work. Don, if you have a few minutes to work with Rey, this could allow you to take that long desired vacation you and Lou were due for. With documentation, we could even offload this as a standard script for Infrastructure to manage."

"Less on my plate?" Don jumped up. "Let's do it!"

The first thing Rey did was back up the original script. She then broke apart the script with some spacing.  "I find separating the lines visually and using tabs to indent blocks makes things easier to view when I have to troubleshoot."

$r=Invoke-RestMethod -Uri 'https://shipping.contoso.com/api.svc' -Method Get -OutFile m.pdf

If ($r.result -eq 'SUCCESS')

{

Send-MailMessage -Attachments m.pdf -Bcc ship@contoso.local -From shippingsystem@contoso.local -Body "Confirmation of Shipment for Customer $($r.Name) - Shipped $($r.date)" -To "$($r.email)" -Subject 'Shipping confirmation' -SMTPServer 'smtp.contoso.com'

}

else

{

Write-Host "Tr Err $($.errorcode)"

}

"We can also use the backtick character to split the one long line apart into more readable chunks. You just need to make sure there is a space preceding its use."

$r=Invoke-RestMethod -Uri 'https://shipping.contoso.com/api.svc' -Method Get -OutFile m.pdf

If ($r.result -eq 'SUCCESS')

{

Send-MailMessage -Attachments m.pdf -Bcc ship@contoso.local `

-From shippingsystem@contoso.local `

-Body "Confirmation of Shipment for Customer $($r.Name) - Shipped $($r.date)" `

-To "$($r.email)" -Subject 'Shipping confirmation' `

-SMTPServer 'smtp.contoso.com'

}

else

{

Write-Host "Tr Err $($.errorcode)"

}

Don smiled. "Hey that's pretty cool. I didn't know about that backtick trick. I mean the script works, but it does look easier to read and understand already."

"We can also use very descriptive variables in PowerShell," Rey offered up. "This makes it almost like you're reading a document that has the script explain itself to others editing it or supporting it. I'm guessing $r is the Result of the REST Method?"

Don nodded as Rey made a slight update to the code.

$ResultOfShippingQuery=Invoke-RestMethod -Uri 'https://shipping.contoso.com/api.svc' -Method Get -OutFile m.pdf

If ($ResultOfShippingQuery.result -eq 'SUCCESS')

{

Send-MailMessage -Attachments m.pdf -Bcc ship@contoso.local `

-From shippingsystem@contoso.local `

-Body "Confirmation of Shipment for Customer $($ResultOfShippingQuery.Name) - Shipped $($ResultOfShippingQuery.date)" `

-To "$($ResultOfShippingQuery.email)" -Subject 'Shipping confirmation' `

-SMTPServer 'smtp.contoso.com'

}

else

{

Write-Host "Tr Err $($ResultOfShippingQuery.errorcode)"

}

"What I like to do as well," Rey noted, "is use a variable for anything that is being passed as a parameter to a cmdlet. It allows to me to put all the information somebody might need to change, such as an email address, outside of the code. It also makes that part easier to view and edit for a non-scripter or level 1 support."

She updated the script to move all content being passed as parameters as new variables, near the top of Don's script.

$ShippingManifest='m.pdf'

$ShippingApplicationRESTAPI='https://shipping.contoso.com/api.svc'

$ShippingApplicationMethod='GET'

$ResultOfShippingQuery=Invoke-RestMethod -Uri $ShippingApplicationRESTAPI -Method $ShippingApplicationMethod -OutFile $ShippingManifest

$ShippingStatus=$ResultOfShippingQuery.result

If ($ShippingStatus -eq 'SUCCESS')

{

$From='shippingsystem@contoso.local'

$Bcc='ship@contoso.local'

$SMTPServer='smtp.contoso.com'

$To=$ResultOfShippingQuery.email

$ShipmentDate=$ResultOfShippingQuery.date

$CustomerName=$ResultOfShippingQuery.Name

$Subject='Shipping Confirmation'

$Body="Confirmation of Shipment for Customer $CustomerName - Shipped $ShipmentDate"

Send-MailMessage -Attachments $ShippingManifest -Bcc $Bcc -From $From -Body $Body -To $To `

-Subject $Subject -SmtpServer $SMTPServer

}

else

{

Write-Host "Tr Err $ShippingStatus"

}

Don glanced over. "I'm blown away! You're right. It does almost read like a document."

"You can also add lines with a number sign as comments, to further explain parts of the code," Ben noted.

He entered in a simple example for Don and Lou to see.

# Shipping Rest API connection documentation

# stored on \\Contoso-fs\Docs\Dev\ShippingAPI.docx

# For explanation of use and updated documentation please refer here.

$ShippingManifest='m.pdf'

$ShippingApplicationRESTAPI='https://shipping.contoso.com/api.svc'

$ShippingApplicationMethod='GET'

"Wow! So it looks like there are some standard ways things should be written in Windows PowerShell. Is there somewhere online I can view this?"

"I'll email some great links to help with how you should be writing PowerShell code." Rey emailed the following links to Don.

Style guidelines for the creation of DSC resources:

https://github.com/PowerShell/DscResources/blob/master/BestPractices.md

https://github.com/PowerShell/DscResources/blob/master/StyleGuidelines.md

An excellent offering from the community on Style guidelines for PowerShell:

https://github.com/PoshCode/PowerShellPracticeAndStyle

Rey waved a quick goodbye to Don and Lou. They were about to head to the door, when a large, ominous presence stood blocking their way.

"WHO DO I SPEAK TO ABOUT CLEAR TEXT PASSWORDS in POWERSHELL!?!"

Who is this mysterious entity? What are its intentions? Will TB-7 attack its shoelaces instead?

Find out these and other bizarrely enticing answers in the next exciting episode of "Script Wars."

Sean Kearney, Premier Field Engineer

Enterprise Services Delivery, Secure Infrastructure

Script Wars: The Farce Awakens (part V)


Summary: Rey is skilled in PowerShell, and new to Contoso Holly Jolly Hat Company. Already she has been a great boon to her new co-workers and had her sneakers become the subject of a hungry trash robot.

She was just about to walk out to the hall with Ben Kerberosie, when suddenly a dominant figure blocked their way, with the following words booming through the hall:

"WHO DO I SPEAK TO ABOUT CLEAR TEXT PASSWORDS in POWERSHELL!?!"

----------------------

That entity was none other than Katherine Phantampa! Head of IT Security, and also known as The Gatekeeper. The Protector of Data. The Guardian of all that was physical and logical.

It was said, if there were any questions regarding security, she would respond with "There is only Zuul."

Many rumors about her were discussed amongst employees at Contoso:

  • The day a third of the call center desktops were shipped out to head office because of screens left unlocked during lunch hour!
  • The rumors of nightly inspections for Sticky notes on keyboards and monitors!
  • The echoing story of the day an IT administrator left the data center door jammed open with a chair, and walked away to the water fountain for a sip. They say the conversation that day shocked IT so much, they all now wear portable water coolers….

To say Katherine was a force to be reckoned with was an understatement. She wasn't a mean person, and quite often was the first one to buy donuts on Fridays, or offer a cheery "Hello!" It was just that she was incredibly serious about ensuring the environment was secure to protect customer data.

People feared and respected her. So when you heard that voice booming, something was up. And did we mention this was "Day 1" for Rey?

Ben fortunately took the lead, understanding that if Katherine was angry, it was a large concern.

"Where is this happening? We've got a newly hired PowerShell expert with us. I'm certain she can help mitigate the issues. Clear text passwords are not an option. I completely agree, let's sort this out."

Seeing Ben immediately want to work put Katherine at ease. "Okay, well, it's not all the scripts, but in particular it's the ones that are managing Office 365 and Azure that are a concern to me. I've seen some lines like this being used to build the credentials to automate the solutions."

$SecurePassword=ConvertTo-SecureString 'Office365AdminP@ssw0rdSekre!' -asplaintext -force

$Credential=New-Object System.Management.Automation.PsCredential ('o365admin@contosohats.onmicrosoft.com',$SecurePassword)

"Well that is troubling," noted Ben. "Rey, what do you make of this?"

Rey looked at the code. The process was sound, but Katherine was correct. If for any reason this script was compromised, global admin credentials to their Office 365 infrastructure would be in the wild. She knew this was a large risk for the company.

She thought for a moment. "There are a few options. One I might suggest that isn't too difficult to implement, and still gives staff the ability to leverage a solution like this, is to store the password in a f…."

You could almost see the lasers burning from Katherine. "A clear text password in a file?!?!"

Rey understood. Her wording was incorrect. "Sorry, I meant we can prompt for the password and store it in a SecureString format. The file is only good for the machine it was created on, so if compromised it can't be read anywhere else."

Rey wrote some PowerShell code to produce such a file:

$Credential=Get-Credential

$Password=$Credential.GetNetworkCredential().password

$SecureString=ConvertTo-SecureString $Password -asplaintext -force

$EncryptedStringASCII=ConvertFrom-SecureString $SecureString

Out-File -FilePath credpasssecure.txt -inputobject $EncryptedStringASCII

"This won't eliminate the threat, but it will limit the password access to only the machine it's created on. So if I got this file, I can't put the password back together on my laptop. To move this into production with their existing script, they would do this."

Rey provided some sample code in Windows PowerShell.

$EncryptedStringASCII=Get-Content credpasssecure.txt

$SecurePassword=ConvertTo-SecureString $EncryptedStringASCII

$Credential=New-Object System.Management.Automation.PsCredential ('o365admin@contosohats.onmicrosoft.com',$SecurePassword)

Katherine seemed a bit more at ease. "Okay, so it's not blatantly obvious in the code now. I still don't like that there is a file on the machine with this information. The risk is lower, but I'd prefer something a bit more robust."

Rey thought for a moment. "Since we're talking Office 365 and Azure AD credentials, we could look into using Azure Automation. It is designed ideally for this scenario."

Ben looked over. "Azure Automation? What's that?"

"To really simplify it, it's a giant task scheduler in Azure that can work against all components of Azure, including Office 365. But more correctly it's an Orchestrator with Role-Based Access Control, to limit who can and cannot use or access certain features. Being that it's in the Azure datacenters, internet outages also don't affect how those scheduled processes communicate to Office 365."

Katherine brightened up. "So our credentials for scheduled PowerShell scripts would stay directly within the Azure datacenter? Very nice! But how does this mitigate the clear password nightmare?"

Rey smiled. "Within Azure Automation, you can store information as secure assets, accessed by special cmdlets in the scripts. The password and user ID is manually entered, and then only the cmdlet can access the data. The other advantage to this solution is that when passwords need to be updated, the asset is updated and the scripts are targeting the asset."

"No need to rewrite and update scripts because of passwords!" burst out Ben.

Rey connected to her trial copy of Azure to show how the credentials are stored in Azure Automation.

"Once an automation account is created and an initial Runbook set up, you can select credentials to add a credential securely. As you can see, the password is obscured even at entry…."

Screenshot of automation account

"When you are editing a Runbook, which is what holds the PowerShell script, you just choose the credential and select Add to canvas."

Screenshot of runbook options

"As you can see, we just need to add a variable to hold the credential. The script will function as before, without exposing the credentials to anybody who should not have them."

Screenshot of credential code
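A minimal sketch of what that looks like inside a runbook; the asset name 'O365Admin' and the use of the MSOnline module are illustrative assumptions, not part of Rey's demo:

# Retrieve the credential asset stored securely in the automation account
$Credential = Get-AutomationPSCredential -Name 'O365Admin'

# Use it like any other PSCredential object, for example to connect to Office 365
Connect-MsolService -Credential $Credential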

Katherine was very impressed. "I can see why we hired her! Okay, so for the short term we'll look at a combination of interactive logins and some limited use of SecureString text files. I'll also set up a business case on getting Azure Automation into trial to test out its use in our environment. Thank you, Rey!"

And so the battle was over. Good common sense and script design, as well as some secure approaches to credential use, saved the day.

We thank you all for checking out this year's "Hey, Scripting Guy!" holiday special. We hope it was both entertaining and informative for you.

From all of us to all of you, happy holidays!

Sean Kearney, Premier Field Engineer

Enterprise Services Delivery, Secure Infrastructure

Cross-platform PowerShell remoting in action


Summary: Learn how to configure and use cross-platform PowerShell remoting in PowerShell Core.

I’m Christoph Bergmeister, a London-based full stack .NET software developer with a passion for DevOps. To enable product testing of .NET Core apps across non-Windows machines in CI, I was one of the first people to use the new cross-platform remoting capabilities based on an integration of OpenSSH with PowerShell Core. In the early days, setup was very tedious, and as part of my journey I had to experiment a lot to get everything working together nicely. Today, I want to share with you some of what I learned.

Introduction

Some of you might have heard that the next generation of PowerShell is cross-platform. Currently it is known as PowerShell Core, has a version number of 6, and is available as a Beta on GitHub. At the moment, it offers a subset of cmdlet coverage compared to Windows PowerShell. But Microsoft has shifted their development effort towards PowerShell Core, and therefore at least the engine is already a superset. As part of making it cross-platform, the goal is also to allow remoting from any operating system to any operating system, by using a similar syntax and experience of using PSSessions and the Invoke-Command cmdlet. I have used this to create a cross-platform CI testing system. It executes PowerShell deployment scripts in an agnostic way, against remote machines that can be either Windows or Linux. I will showcase what is needed to wire everything up. Disclaimer: Although I learned quite a bit about OpenSSH and how it works, I am no expert, and all I will show you is how to configure it such that it works. I welcome comments on my setup procedure.

 Configure PowerShell remoting

This example shows how to configure remoting from a Windows client to a Linux host, which is the most common scenario. The setup is similar in other configurations of Windows/Linux as a client/host.

Apart from installing PowerShell Core on the client and host machine, we also need to install OpenSSH on both machines. OpenSSH on Linux can be installed on Ubuntu/Debian machines as ‘sudo apt-get install openssh-server openssh-client’ or on RHEL/CentOS/Fedora using ‘yum -y install openssh-server openssh-client’. On Windows, the PowerShell team has created a port named Win32-OpenSSH, which is in pre-release state as well. See the detailed instructions here or here. (For example, you can use Chocolatey, although Chocolatey is a third-party tool, not officially supported by Microsoft.) When I did it the first time, I followed the whole manual process to understand the components better. But if you just want to install everything that you probably need, the following chocolatey command should do:

choco install -y openssh -params '"/SSHServerFeature /SSHAgentFeature"'

Now we still need to configure OpenSSH on the client and host side, by using RSA key based authentication.

Edit the file ‘sshd_config’ as an Administrator in the OpenSSH installation folder (which is something like ‘C:\Program Files\OpenSSH-Win64’). Uncomment the following lines (by removing the hash character):

  • RSAAuthentication yes
  • PubkeyAuthentication yes
  • PasswordAuthentication yes

Also add PowerShell Core as a subsystem in sshd_config by adding the following line (you can get the path to your PowerShell Core executable by using Resolve-Path "$($env:ProgramFiles)\PowerShell\*\*.exe"):

Subsystem powershell C:\Program Files\PowerShell\6.0.0-beta.9\pwsh.exe -sshs -NoLogo -NoProfile

Then, restart the sshd process (that is, the ssh daemon):

Restart-Service sshd

Now we need to generate a pair of RSA keys, as follows:

ssh-keygen -t rsa -f ReplaceThisWithYourDesiredRsaKeyFileName

This generates you 2 files: one with the ending ‘.pub’, and one without. The former is the public key that you will need to distribute, and the latter is the private key.

On the remote Linux machine, you need to configure OpenSSH as well. Edit the config file /etc/ssh/sshd_config, and, similar to the above, enable the three authentication methods (PasswordAuthentication, RSAAuthentication, and PubkeyAuthentication). Adding the subsystem has a slightly different syntax:

Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile

Then append the content of the public key that you generated before to the .ssh/authorized_keys file, and optionally create a folder and set the correct permissions. The following lines take care of everything, and all you need to do is insert the path to your public key file.

mkdir -p .ssh

chmod 700 .ssh

cat PathToPublicKeyOfSpecificWindowsMachineToAllowPasswordLessRemoting.pub >>

.ssh/authorized_keys

chmod 640 .ssh/authorized_keys

sudo service sshd restart

Now open PowerShell Core, and let’s test remoting the first time by using the new parameter set of ‘Invoke-Command’ for OpenSSH remoting:

Invoke-Command -ScriptBlock { "Hello from $(hostname)" } -UserName $remoteMachineLogonUserName -HostName $IpAddressOfRemoteMachine -KeyFilePath $PathToPrivateRsaKeyFile

The first time you run this command, you will be prompted to confirm that you trust the connection. Choose ‘yes’, and this will add the connection to the known_hosts file. Should your remoting client get locked down after the first configuration, you can make it add a new machine to the known_hosts file via the command line, by using:

ssh -o StrictHostKeyChecking=no username@hostname

You will have noticed that you also needed to specify the full path to the private RSA key, which is a bit annoying. We can get rid of that parameter, however, by using:

ssh-add.exe $PathToPrivateRsaKeyFile

One important note is that this command and the RSA key file generation command have to be executed as the user who will execute the PowerShell remoting commands. That is, if you want your co-workers or the build agent account to be able to use PowerShell OpenSSH remoting, you need to configure the public and private keys both on the client and host side for every user.

If you want to set up remoting in other configurations of Windows/Linux as client/host, the process is very similar. There is a lot of documentation already out there, especially on the Linux side.
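If you prefer a persistent connection, the same SSH parameter set is available on New-PSSession. Here is a minimal sketch, reusing the placeholder variable names from above:

# Create a reusable SSH-based session
$session = New-PSSession -HostName $IpAddressOfRemoteMachine -UserName $remoteMachineLogonUserName -KeyFilePath $PathToPrivateRsaKeyFile

# Run a command in the session, then clean up
Invoke-Command -Session $session -ScriptBlock { $PSVersionTable.PSVersion }
Remove-PSSession $session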

Wrap up OpenSSH remoting in Windows PowerShell

Now that we solved the remoting problem, let’s write a wrapper so that we can use PowerShell Core from Windows PowerShell, which will run on the build agent. The first problem to be solved is hopping into PowerShell Core from a Windows PowerShell task on the Windows build agent:

<#

.Synopsis

Looks for the latest pre-release installation of PS6, starts it as a new process and passes the scriptblock to be executed.

.DESCRIPTION

The returned result is an output string because it is a different process. Note that you can only pass in the string value of variables but not the objects themselves.

In order to use variables in the passed-in scriptblock, use [scriptblock]::Create("Write-Output $stringVariablefromouterScope; `$variableToBeDefinedHere = 'myvalue'; Write-Host `$variableToBeDefinedHere")

.EXAMPLE

Invoke-CommandInNewPowerShell6Process ([scriptblock]::Create("Write-Output $stringVariablefromouterScope; `$variableToBeDefinedHere = 'myvalue'; Write-Host `$variableToBeDefinedHere"))

#>

Function Invoke-CommandInNewPowerShell6Process

{

[CmdletBinding()]

Param

(

[Parameter(Mandatory=$true)]

[scriptblock]$ScriptBlock,

[Parameter(Mandatory=$false)]

$WorkingDirectory

)

$powerShell6 = Resolve-path "$env:ProgramFiles\PowerShell\*\*.exe" | Sort-Object -Descending | Select-Object -First 1 -ExpandProperty Path

$psi = New-object System.Diagnostics.ProcessStartInfo

$psi.CreateNoWindow = $true

$psi.UseShellExecute = $false

$psi.RedirectStandardOutput = $true

$psi.RedirectStandardError = $true

$psi.FileName = $powerShell6

$psi.WorkingDirectory = $WorkingDirectory

# To pass double quotes correctly when using ProcessStartInfo, one needs to replace double quotes with 3 double quotes". See: https://msdn.microsoft.com/en-us/library/system.diagnostics.processstartinfo.arguments(v=vs.110).aspx

$ScriptBlock = [scriptblock]::Create($ScriptBlock.ToString().Replace("`"", "`"`"`""))

if ($powerShell6.contains('6.0.0-alpha'))

{

$psi.Arguments = $ScriptBlock

}

else

{

$psi.Arguments = "-noprofile -command & {$ScriptBlock}"

}

$process = New-Object System.Diagnostics.Process

$process.StartInfo = $psi

Write-Verbose "Invoking PowerShell 6 $powerShell6 with scriptblock $ScriptBlock"

# Creating string builders to store stdout and stderr.

$stdOutBuilder = New-Object -TypeName System.Text.StringBuilder

$stdErrBuilder = New-Object -TypeName System.Text.StringBuilder

# Adding event handers for stdout and stderr.

$eventHandler = {

if (! [String]::IsNullOrEmpty($EventArgs.Data)) {

$Event.MessageData.AppendLine($EventArgs.Data)

}

}

$stdOutEvent = Register-ObjectEvent -InputObject $process `

-Action $eventHandler -EventName 'OutputDataReceived' `

-MessageData $stdOutBuilder

$stdErrEvent = Register-ObjectEvent -InputObject $process `

-Action $eventHandler -EventName 'ErrorDataReceived' `

-MessageData $stdErrBuilder

[void]$process.Start()

# begin reading stdout and stderr asynchronously to avoid deadlocks: https://msdn.microsoft.com/en-us/library/system.diagnostics.process.standardoutput%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396

$process.BeginOutputReadLine()

$process.BeginErrorReadLine()

$process.WaitForExit()

Unregister-Event -SourceIdentifier $stdOutEvent.Name

Unregister-Event -SourceIdentifier $stdErrEvent.Name

$stdOutput = $stdOutBuilder.ToString().TrimEnd("`r", "`n"); # remove last newline in case only one string/line gets returned

$stdError = $stdErrBuilder.ToString()

Write-Verbose "StandardOutput:"

Write-Output $stdOutput

If (![string]::IsNullOrWhiteSpace($stdError))

{

# 'Continue' is the default error preference

If ($ErrorActionPreference -ne [System.Management.Automation.ActionPreference]::Continue)

{

Write-Output "StandardError (suppressed due to ActionPreference $ErrorActionPreference): $stdError"

}

else

{

Write-Error "StandardError: $stdError"

}

}

Write-Verbose "PowerShell 6 invocation finished"

}

The function above is complex because it uses the ProcessStartInfo .NET class, both to retrieve stderr and stdout without deadlocks and to pass double quotes to it correctly. I decided not to use Start-Process, because that cmdlet writes to disk when capturing stderr and stdout.

Execute platform-specific commands

Using PowerShell Core remoting, we can now start writing PowerShell code that can be executed from any platform on any other platform. You don't even need to know what the remote platform is! It’s like Xamarin for PowerShell. However, sometimes you will want to do something very specific on a certain platform (for example, I decided to fall back to WinRM-based remoting for Windows hosts, but also needed to execute commands as ‘sudo’ on Linux). So, I first needed to figure out what type of platform the remote machine is, which I did by using TTL (TimeToLive) values. It might not be the ideal method, but it worked reliably for me and was fast to implement. It is based on the fact that Linux systems typically have TTL values around 64, and Windows has TTL values around 128. It should work for most modern and commonly used operating systems, but I am sure there are special cases where it does not. So just experiment to see what works for you.

Enum OS

{

Linux = 1

Windows = 2

}

Function Get-OperatingSystemOfRemoteMachine

{

[CmdletBinding()]

Param

(

$remoteHost

)

[int]$responseTimeToLive = Test-Connection $remoteHost -Count 1 | Select-Object -ExpandProperty ResponseTimeToLive

$os = [System.math]::Round($responseTimeToLive/64) # TTL values are not 100% accurate -> round to get a range of +/-32

if($os -eq 1) #Linux (TTL should be around 64)

{

return [OS]::Linux

}

elseif($os -eq 2) #Windows (TTL should be around 128)

{

return [OS]::Windows

}

else

{

Throw "OS of remote machine $remoteHost could not be determined by TTL value. TTL value was: $responseTimeToLive"

}

}

Execute commands as sudo

Armed with this knowledge, we can now make platform-specific decisions, and, for example, build up our scriptblocks. But how can we execute sudo commands? PowerShell Core itself supports native Linux commands when executed locally, but executing commands by using sudo rights remotely is not fully baked yet (see the tracking issue). So, putting ‘sudo whoami’ in your ScriptBlock will give you an error. But I found a workaround, which is based on the fact that the sudo password can be piped into sudo using the -S option. Therefore, the following command works, executed remotely:

echo InsertSudoPasswordHere | sudo -S whoami

Yes, you need to be careful about security here, but depending on your use case, this might be OK.
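Wrapped in a remoting call, the same workaround might look like the following sketch. The session variable is assumed to be an existing SSH-based PSSession, and the password handling is deliberately simplified for illustration:

$sudoPassword = 'InsertSudoPasswordHere'   # placeholder - never hard-code real passwords

Invoke-Command -Session $session -ScriptBlock {
    # Pipe the password to sudo via -S (read the password from stdin)
    echo $using:sudoPassword | sudo -S whoami
}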

Practical tips

Most of the remoting is based on scriptblocks. You can inject variables (as a string) into it by using the scriptblock constructor, but also take care to escape characters if you want to use variables:

[scriptblock]::Create("Write-Host $variableNameThatIsDefinedOnTheClient; `$meaningOfTheUniverseAndEverything = 40+2; Write-Host `$meaningOfTheUniverseAndEverything")

Should your code get more complex, then I suggest defining a PowerShell function that takes a PSSession as an argument. This is because you can also create a PSSession by using the new parameter set shown above. The idea is that all the scriptblock does is re-import the necessary modules, and then execute a top-level function that takes a PSSession:

$myscriptBlock = [scriptblock]::Create("Import-Module $FullPathToMyRequiredModule; Invoke-MyCommand -PSSession `$session")

$scriptBlockToCreateSession = [scriptblock]::Create("`$VerbosePreference = '$VerbosePreference'; `$session = New-PSSession -HostName $HostName -UserName $UserName")

$scriptBlockMain = [scriptblock]::Create("$scriptBlockToCreateSession; Invoke-Command -ScriptBlock { $ScriptBlock } -Session `$session;")

The above example also shows how to correctly propagate the $VerbosePreference, which Invoke-Command currently does not do (see this GitHub issue for tracking).

In our builds, we need to copy our deliverables to our system under test, but I did not want the deployment/installation scripts to be platform specific. I needed to solve problems such as finding a common path. I sniff the home directory, and then create the path on the remote machine:

$homeDirectoryOnRemoteMachine = Invoke-Command -Session $Session -ScriptBlock { (Get-Location).Path }

$destinationPathLocalToRemoteMachine = [System.IO.Path]::Combine($homeDirectoryOnRemoteMachine, $FolderNameOnRemoteMachine)
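The copy itself can then happen over the same session. This is a sketch that assumes your Copy-Item version supports -ToSession for this session type; the local 'drop' folder is a placeholder:

# Copy the build output into the destination folder on the remote machine
Copy-Item -Path '.\drop\*' -Destination $destinationPathLocalToRemoteMachine -ToSession $Session -Recurse -Force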

Conclusion

We have seen several useful pieces that you can wire together for your needs, which could be:

  • Setting up OpenSSH remoting, without passwords, to be able to use it for CI purposes, for example.
  • Calling PowerShell Core from Windows PowerShell. This could also be used for CI machines, for example, or for convenience to do cross-platform remoting from Windows PowerShell.
  • Determining the operating system type of a remote machine to decide whether an OpenSSH or WinRM PSSession should be created. I have used this to write an Invoke-CommandCrossPlatform cmdlet that also wraps the complex logic of concatenating various scriptblocks.
  • Overcoming current limitations of OpenSSH remoting, to execute remote commands as sudo.

If you have any questions, suggestions, or want to share your experience, comment below, or feel free to contact me.

Christoph Bergmeister, guest blogger

PowerShell and the REST API for the IT pro


Summary: This post provides a quick introduction to what the REST API is, and how it applies to Windows PowerShell.

Q: Hey, Scripting Guy!
I can see there is this cool cmdlet called Invoke-RestMethod. I've been told REST APIs are all around, and this allows me to consume that data. Could you give me a hand getting started?
—SH

A: Hello SH,
Glad to help out! I remember hearing about REST APIs the first time, thinking they might be a way to take a nap at work. Was I wrong on that one!

What a "REST API" is at the most BASIC level is really just a very fancy web Endpoint. You could, if you were really creative, type in everything you need to connect to one in your browser. But that wouldn't be a very productive use of time.

What REST stands for is "Representational State Transfer." It’s a connectionless protocol, which means it shouldn't care if there is a temporary break in the connection.

You can connect, ask it a question, and even in some cases send data. It will think about that question and can return content back (if so designed).

Generally, when you are contacting a REST API, you will need to provide some information. You also need to understand the "buzzwords" when you're reading documentation for a REST Endpoint.

A URI or Endpoint

This will be an HTTP or HTTPS endpoint. It could be as detailed as this:
https://speech.platform.bing.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1

Or it could be as simple as this:
https://blogs.msdn.microsoft.com/powershell/feed/

Method

In all cases, you will be providing a "method." This is similar to the verb in PowerShell. With REST, there are a few pretty common ones like PUT, GET, or POST. There are others like DELETE and PATCH. Which method you use is defined by the documentation of the owner of the REST API.

Authentication

Some REST APIs do not require authentication. A weather API might be an example, since no critical data is passing over the wires.

A REST API hosted by a Human Resources application would more than likely prefer authentication. They would need to know who is accessing that data, as part of its control mechanism.

Authentication could be a regular authentication pop-up for an ID and password. It could also be something like an access token, a temporary key generated initially and used for short term access uses.

Headers and the body

Headers and the body contain parameters and data we need to send up to the API. A good example of a header parameter might be the UserAgent string to identify your browser to the API. The body could be the raw data you need sent to a Translation API.

Knowing how these values can be consumed by Windows PowerShell, and how you can find which ones to use, are the trick to using a REST API.

For some excellent examples that we are going to work with in upcoming articles, see the Azure Cognitive Services REST API.

When we are building values for a header in PowerShell for Invoke-RestMethod, the format will look like this for the most part:
@{'Valuename' = 'SomeValue' }

An example you will see early on is passing the header needed for the authentication component of the REST API. It will look like this:
$Header=@{'Ocp-Apim-Subscription-Key' = $APIKey }

Or, a more complex one would look like this:
$Header=@{ `
'Content-Type' = 'application/ssml+xml'; `
'X-Microsoft-OutputFormat' = $AudioOutputType; `
'X-Search-AppId' = $XSearchAppId; `
'X-Search-ClientId' = $XSearchClientId; `
'Authorization' = $AccessToken `
}

Another hint you can use to learn what a REST method wants will be examples of the "Responses" documented for REST APIs. Take a look at the following example:
POST /synthesize
HTTP/1.1
Host: speech.platform.bing.com

X-Microsoft-OutputFormat: riff-8khz-8bit-mono-mulaw
Content-Type: application/ssml+xml
Host: speech.platform.bing.com
Content-Length: 197
Authorization: Bearer [Base64 access_token]

<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'>Microsoft Bing Voice Output API</voice></speak>

Reading down line by line, you can see this particular operation is calling for a "POST" method. If you read the documentation on this particular function, you would notice that Content-Type is an actual value being supplied, as is X-Microsoft-OutputFormat.

Over the next few articles, we will be using PowerShell to consume the Azure Cognitive Services Text to Speech API, by using Invoke-RestMethod. My hope is that not only will you learn something cool, but you'll have a bit of fun having Azure talk for you.

Stay tuned until next time, when we look at Azure Cognitive Services and getting some basic authentication happening for our little project.

I invite you to follow “Hey, Scripting Guy!” on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum.

Sean Kearney, Premier Field Engineer

Enterprise Services Delivery, Secure Infrastructure

 

 

PowerTip: Use PowerShell to read an RSS feed


Summary: “Hey, Scripting Guy!” shows you how to use Invoke-RestMethod to read a list of entries from an RSS feed.

 How can I use Windows PowerShell to see the list of articles from an RSS feed?

      Just use the Invoke-RestMethod and provide the full path to the link to the RSS feed. Here is an example:

   Invoke-RestMethod -Uri 'https://blogs.technet.microsoft.com/heyscriptingguy/rss.aspx'
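      The feed entries come back as objects, so you can work with them like any other PowerShell output. For example (property names depend on the feed):

   Invoke-RestMethod -Uri 'https://blogs.technet.microsoft.com/heyscriptingguy/rss.aspx' | Select-Object -First 5 -Property title, pubDate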

 

Windows PowerShell and the Azure Text-to-Speech Rest API (Part 1)


Summary: You can use Windows PowerShell to authenticate to the Microsoft Cognitive Services Text-to-Speech component through the Rest API.

Q: Hey, Scripting Guy!

I heard about the cool Microsoft Cognitive Services, and had heard they have a REST API. Does that mean I can use PowerShell to consume them? Could you show me how to authenticate to it?

—SH

A: Hello SH,

I just love waking up and saying "YES YOU CAN!" when people ask "Can you do that with Windows PowerShell?"

So… ahem…. "Yes you can!"

For those who didn't know, Cognitive Services are hosted in the Azure cloud, and they allow you to do many things easily. With very little work in PowerShell or any other programming language, we can moderate content visually.

We can easily use it to search the internet for content or, like we'll do over the next series of articles, make use of the Text-to-Speech component.

First, sign up for the trial of Cognitive Services, which is part of the Azure Subscription.

If you don't already have an Azure account, select FREE ACCOUNT to get yourself started. We'll wait for you to finish.

Once that process is done, you'll see some selections. Select Login to authenticate and add the trial. Then choose Speech to get a trial started with the Speech API.

Next, the most important button to select is Get API Key, beside the Bing Speech API. This will generate the application keys you'll need to talk to the API.

Screenshot of Get API Key

IMPORTANT: If you already have an Azure subscription, and just want to get going under production (that is, not the trial), the steps are different. Please follow these instructions if you do not want to use the trial version, or you want to move to production from your trial.

"But…but Scripting Guy!" I can hear you say. "This is a PowerShell article…. Where's the PowerShell?"

Well I was wondering when you were going to ask. I like to show both halves of the solution whenever I can.

First, you'll need the updated AzureRM cmdlets from the PowerShell gallery to make sure you have the AzureRM.CognitiveServices module. If you haven't done this before, just run the following cmdlet to get the complete set of cmdlets for managing Azure:

Install-Module AzureRM

If you are running an older version and need to update, just run this:

Update-Module AzureRM

You can confirm if the new module is available by running this:

Get-Module -ListAvailable AzureRM.CognitiveServices

On my system, I have a total of seven new cmdlets available to me for provisioning and managing these resources.

Screenshot of new cmdlets

We would first authenticate by using the Login-AzureRMAccount cmdlet. Once connected, we can examine the properties of that newly created resource.

To list all Cognitive Service accounts we've created, we can run the following cmdlet. In this example, we have only the one created.

Get-AzureRMCognitiveServicesAccount

Screenshot of account created

But if we'd like to re-create this, we can store and grab the properties by targeting the name of the account and the resource group it's stored within. We will place this within the object called $Account.

$Account=Get-AzureRmCognitiveServicesAccount -Name 'HSG-CognitiveService' -ResourceGroupName 'HSG-ResourceGroup'

We'll need some data from this object we captured, to rebuild or duplicate this process in the future.

To begin with, let's obtain the first two: Location and the AccountType. The location should not be mistaken for Azure datacenter locations.

We can obtain this from the object we created earlier, called $Account. It is stored within the property called Location, as seen in the preceding screenshot. Its value is global.

$Location=$Account.Location

An additional value we'll need is the type of account we created. In our case, it was a connection to Bing.Speech. You can see this attached to the AccountType property, which we will obtain from the $Account object we created earlier.

$AccountType=$Account.AccountType

Another property we'll need to obtain is the list of objects Azure uses to identify the pricing tiers. To find this, we can pipe the $Account object into the Get-AzureRMCognitiveServicesAccountSkus cmdlet. We will need to grab the values property specifically from this object, and expand it.

$Skus=$Account | Get-AzureRmCognitiveServicesAccountSkus | Select-Object -expandproperty Value

If you examine the newly created object, you'll see results that don’t seem all that useful:

Screenshot of object results

However, we can do a little PowerShell magic, and expand the Sku property further by doing this:

$Skus.Sku

Screenshot of expanded property

In our case, we only need to worry about the SKU used from the object we most recently created. Our goal is simply to duplicate the process. We can access the SKU directly again from the $Account object.

$Account.Sku

This will give us output similar to what we saw before (when we grabbed all the SKUs). In our case, to rebuild the resource, we only need the property called Name.

$Account.Sku.Name

To capture any of these properties, and avoid re-typing, we can just pipe to Set-Clipboard:

$Account.Sku.Name | Set-Clipboard

To re-create this resource from a new Azure subscription, we would just run the following script (after logging in of course):

# AzureRM Resource Group

$ResourceGroup='HSG-ResourceGroup'

# Azure Cognitive Services Account SKU

$Sku='F0'

# Azure Cognitive Services Account Type

$AccountType='Bing.Speech'

# Unique Name to our Azure Cognitive Services Account

$AccountName='HSG-AzureRMSpeech'

New-AzureRmCognitiveServicesAccount -ResourceGroupName $ResourceGroup -Name $AccountName -Type $AccountType -SkuName $Sku -Location 'global' -Force

Be sure to visit again as we start to look into how to leverage this amazing resource by using the REST API!

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum.

Sean Kearney, Premier Field Engineer

Enterprise Services Delivery, Secure Infrastructure

 

 


PowerTip: Create new authentication keys for AzureRM Cognitive Services


Summary: Change the keys to authenticate to Azure RM Cognitive Services, by using Windows PowerShell.

  Hey, Scripting Guy! I created the keys for my Rest API. I know I can change them in the web portal, but is there a faster way of doing it through Windows PowerShell?

  There absolutely is! Just use the New-AzureRMCognitiveServicesAccountKey cmdlet to reset either Key1 or Key2 (or both). Here is an example, where we generate a new sequence for Key1:

New-AzureRmCognitiveServicesAccountKey -ResourceGroupName 'HSG' -Name 'Sample' -KeyName Key1

Windows PowerShell and the Text-to-Speech REST API (Part 2)


Summary: You can use Windows PowerShell to authenticate to the Text-to-Speech REST API.

Q: Hey, Scripting Guy!

I was reading up on the REST API for the Text-to-Speech component of Cognitive Services. I'm just starting to learn how to use REST and PowerShell. Could you spend a little bit of time and show me how to authenticate to the service?

—SH

A: Hello SH,

Authentication is one of the biggest pieces you'll want to learn. Telling a system at the back end who you are, and knowing how to communicate with it, is very critical before you can do anything fun!

Talking to the Text-to-Speech API is pretty easy once you know the basics. If we were to pretend the web service was a door with a really cool VIP in the back and a bodyguard watching it, the interaction might go like this:

Knocking on the door with no credentials or invalid credentials.

"Hello, I want to walk in."

Strange look on the bodyguard's face, and he says "Huh?" or "Grrrr" (or roars like Chewbacca). That would be your error code.

You can keep doing this as many times as you like, and you're still not going into that door.

But provide the correct "secret phrase" (or, in our case, key) and the interaction goes like this:

"Hello, I want to walk in. The secret phrase is 'crustless frozen peanut butter sandwiches'."

The bodyguard looks at the list, sees that secret phrase beside your name, and nods. He then calls up on his two-way radio and gets a new, second secret phrase, with instructions to a second door.

Now you meet the second bodyguard, who is much meaner than the first one. (Missed getting coffee that morning I suppose.) This one wants your second phrase, and after validating it's good, the second bodyguard starts a stopwatch.

You can run in and out of the door to do what you need with that new phrase, but after a certain amount of time he scratches that phrase off the list.

So, you run back to the first bodyguard, hand him your first passphrase, and the process repeats until you're done for the day.

That's pretty much what the authentication piece for the REST API does, and how it works.

We talk to a REST API, and pass it one of the keys we generated in the last article. The REST API generates a token, which is a long string of characters, and you need to use that token with the second REST API. This token is only good for a short term, and you need to go back and request a new one every so often.

Let's get some basics together to make this happen.

This first "door" is an endpoint to the REST API that handles the authentication. The documentation on the use of this endpoint can be found under the authentication header at Bing Text-to-Speech API.

We are provided the following information immediately:

POST https://api.cognitive.microsoft.com/sts/v1.0/issueToken

Content-Length: 0

From this information, we can see we need to use a POST method, and the endpoint is https://api.cognitive.microsoft.com/sts/v1.0/issueToken.

Let's start to build that into some Windows PowerShell variables.

 

# Rest API Method

$Method='POST'

 

# Rest API Endpoint

$Uri='https://api.cognitive.microsoft.com/sts/v1.0/issueToken'

 

The next piece we need to supply is the header information. We need to pass our key to a value named Ocp-Apim-Subscription-Key.

The value of the key is one of the two authentication keys you produced last time, when you initially created the Cognitive Services account for Bing.Speech. I'm going to use a fictitious one for our example.

Here, we'll populate the header. The header in this case is pretty simple, containing only one value.

 

# Authentication Key

$AuthenticationKey='13775361233908722041033142028212'

 

# Headers to pass to Rest API

$Headers=@{'Ocp-Apim-Subscription-Key' = $AuthenticationKey }

 

We then call up the REST endpoint directly, to see if everything worked.

Invoke-RestMethod -Method $Method -Uri $Uri -Headers $Headers

 

If it worked properly, and everything was formatted the way it should be, the output would be something similar to the following. This is your token for the temporary access to the second endpoint.

Screenshot of token

We would then modify our Invoke-RestMethod call to capture its result in a PowerShell object, so we can reuse it later.

 

# Get Authentication Token to communicate with Text to Speech Rest API

$Token=Invoke-RestMethod -Method $Method -Uri $Uri -Headers $Headers

 

On the other hand, if you didn't supply a valid authentication key, you would get this:

Screenshot of error message

So, with this knowledge, we can even trap for this in our script.

Try

{

[string]$Token=$NULL

# Rest API Method

[string]$Method='POST'

# Rest API Endpoint

[string]$Uri='https://api.cognitive.microsoft.com/sts/v1.0/issueToken'

# Authentication Key

[string]$AuthenticationKey='13775361233908722041033142028212'

# Headers to pass to Rest API

$Headers=@{'Ocp-Apim-Subscription-Key' = $AuthenticationKey }

# Get Authentication Token to communicate with Text to Speech Rest API

[string]$Token=Invoke-RestMethod -Method $Method -Uri $Uri -Headers $Headers

}

Catch [System.Net.Webexception]

{

Write-Output 'Failed to Authenticate'

}
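Once you have $Token, the follow-up calls to the speech endpoint pass it as a Bearer token in the Authorization header. A minimal sketch of that header (the variable name is just for illustration):

# Use the temporary token as a Bearer token on subsequent requests
$SpeechHeaders=@{'Authorization' = "Bearer $Token"}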

Now we know how to get a token to communicate with the Bing.Speech API. Pop in again next time, when we'll show you how to start putting the building blocks together to use the service!

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum.

Sean Kearney, Premier Field Engineer, Microsoft

Frequent contributor to Hey, Scripting Guy! 

 

 

PowerTip: Build simple HTML with PowerShell


Summary: Here’s how to use the ConvertTo-HTML cmdlet to build basic HTML content.

  Hey, Scripting Guy! Occasionally I need to build basic HTML documents. I heard there was a way to do that with Windows PowerShell.

  There most certainly is! Just use the ConvertTo-HTML cmdlet to save the day! For example:

$SampleDoc=@'
This is a simple text Document in PowerShell
That I am going to make into a Tiny web page
🙂
'@

ConvertTo-Html -InputObject $SampleDoc
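If you want to view the result, a quick follow-up might be the following (the file name is just an example):

# Save the generated HTML to a file and open it in the default browser
ConvertTo-Html -InputObject $SampleDoc | Out-File -FilePath .\Sample.html
Invoke-Item .\Sample.html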

              

Introducing the DscLcm utility for PowerShell


Summary: Desired State Configuration is a great deployment tool to meet your organization’s infrastructure-as-code goals. I recently came across a situation for a project that uses the Push Service (as opposed to the Pull Service). It required me to be able to apply a new partial configuration to a node, without any knowledge of what partial configurations were already in place. This led to the development of the DscLcm module, which solved that problem for my team.

DscLcm module

The DscLcm PowerShell module is a utility that interacts with the Local Configuration Manager (LCM) of a target machine. You don’t have to define your configuration as a script block. This functionality is useful in both push and pull scenarios. But the full benefit comes from a push model, because a lot of this functionality is built into the pull model already.

My goal with this module is to provide standard PowerShell functions that allow you to interact with the LCM. At the moment, DscLcm allows you to:

  • Modify an LCM Setting (for example, RebootNodeIfNeeded).
  • Add a partial configuration to the LCM.
  • Remove a partial configuration from the LCM.
  • Modify properties of an existing partial configuration block on the LCM.
  • Reset the LCM to a default state.

Before now, you would have to define an LCM setting in a configuration script block, as seen here:

[DSCLocalConfigurationManager()]

configuration LCMConfig

{

Node localhost

{

Settings

{

RefreshMode = 'Push'

}

}

}

With the new DscLcm PowerShell module, this same setting can be applied with the following command:

Set-LcmSetting -RefreshMode Push

This format is more conventional for working directly with the LCM, versus having to set up an entire configuration block for potentially only one setting change. In the following example, notice that modifying the few settings with the Set-LcmSetting command did not alter any of the already existing settings!

Screenshot of PowerShell
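If you would rather confirm the result from the console than from a screenshot, the built-in Get-DscLocalConfigurationManager cmdlet shows the current meta-configuration (the properties selected here are just a sample):

# Inspect the LCM settings that Set-LcmSetting just modified
Get-DscLocalConfigurationManager | Select-Object -Property RefreshMode, RebootNodeIfNeeded, ConfigurationMode, PartialConfigurations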

As I mentioned, one of the cool features of DscLcm is that it gives you the ability to append a DSC partial configuration to an LCM, without losing any of its current settings. Traditionally, one would have to re-define all the partial configurations and other LCM settings in the LCM configuration block, before deploying the resulting Managed Object Format (.mof) file. The main benefit of this functionality is that it gives you the ability to apply a new partial configuration, without having to know what partial configurations are already on the target.

In the following example, suppose that the localhost LCM configuration already knows about a partial configuration called ‘ServiceAccountConfig’. In order to apply a new partial to that LCM, you would have to define both ‘ServiceAccountConfig’ and the new partial, ‘SharePointConfig’, in the meta configuration.

[DSCLocalConfigurationManager()]

configuration PartialConfigDemo

{

Node localhost

{

PartialConfiguration ServiceAccountConfig

{

Description = 'Configuration to add the SharePoint service account to the Administrators group.'

RefreshMode = 'Push'

}

PartialConfiguration SharePointConfig

{

Description = 'Configuration for the SharePoint server'

RefreshMode = 'Push'

}

}

}

PartialConfigDemo

With DscLcm, this same function can be performed with the following command:

Add-LcmPartialConfiguration `

-PartialName SharePointConfig `

-RefreshMode Push `

-Description 'Configuration for the SharePoint server'

Screenshot of PowerShell

For the exact opposite scenario, we can also remove individual partial configurations by name. In addition to removing the partial configuration object, this function will also remove any dependencies on that partial as well. The next time the consistency check runs, the LCM will also automatically remove the partial configuration .mof for you.

Remove-LcmPartialConfiguration -PartialName ServiceAccountConfig

Screenshot of PowerShell

For those times when you have a defined partial configuration on a target and just want to adjust one of its settings, you can modify those settings as follows:

Set-LcmPartialConfiguration `

-PartialName SharePointConfig `

-DependsOn "ServiceAccountConfig" `

-Description "New Description"

Screenshot of PowerShell

The last cmdlet in this version of the module lets you reset the LCM to a blank state. This comes in handy for just about any scenario when you need to scrap a configuration altogether.

Reset-LcmConfiguration

Screenshot of PowerShell

As you can see, these functions greatly reduce the overhead for defining your LCM settings in any environment. Keep in mind, this is only one step in the DSC publishing process. Even though we are adding a partial configuration to the LCM, we still need to publish a partial configuration .mof in order for the full process to be completed. I have found these functions to be very handy as I work with DSC, and I hope you will too. Please feel free to leave any feedback or suggestions at either of the links below.
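For the push model, that publishing step might look something like the following sketch. The configuration name, output path, and computer name are placeholders, and the exact naming requirements for partial configuration documents are covered in the DSC documentation:

# Compile the partial configuration to a .mof (assumes a configuration named SharePointConfig is loaded in the session)
SharePointConfig -OutputPath 'C:\Dsc\SharePointConfig'

# Publish the document to the target node without applying it immediately
Publish-DscConfiguration -Path 'C:\Dsc\SharePointConfig' -ComputerName 'TargetNode'

# Ask the LCM to apply everything that has been published
Start-DscConfiguration -UseExisting -ComputerName 'TargetNode' -Wait -Verbose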

The module can be installed directly with PowerShell from the PowerShell Gallery (for example, Install-Module -Name DscLcm). The source code and package are available here:

https://github.com/aromano2/DscLcm

https://www.powershellgallery.com/packages/DscLcm

 

Anthony Romano

Consultant, Microsoft

Get certificate info into a CSV by using PowerShell


Summary: Certificate management is always challenging. Let’s explore how to use PowerShell to export local certificate information to a comma-separated values (CSV) file on Windows 7 (or later) computers.

Q: Hey, Scripting Guy!

How can I get all my certificate info into a CSV on my Windows computers?

—SH

A: Hello SH,

Patrick Mercier here, with my first “Hey, Scripting Guy!” post. This question has come up at multiple customer sites, as they plan a new PKI infrastructure or a revamp of their current one!

There are tons of resources on using PowerShell for querying certificates, but questions around finding expiring certificates, self-signed certificates, or certs issued by specific roots keep coming up when I meet with customers. My current customer needed to find self-signed certificates, so we took this local scan example and wrapped it in Invoke-Parallel to scan targeted systems! Thanks to Joel Mueller, a fellow Premier Field Engineer (PFE) at Microsoft who got me started on this, and to the rest of the “Hey, Scripting Guy!” community for providing a starting point.

As I’m sure you’ve seen in other posts here, the whole thing starts with the Get-ChildItem cmdlet.  At its most basic level, the following command lists all the certificates on your local system:

Screenshot of PowerShell
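
For reference, here's roughly what that command looks like, reconstructed from the breakdown that follows (the original appeared only as a screenshot):

Get-ChildItem -Path Cert:\LocalMachine -Recurse |
    Where-Object { $_.PSIsContainer -eq $false } |
    Format-List -Property *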

Let’s break it down:

  • We’re asking for the child items of the certificate branch of the local machine (Get-ChildItem -path Cert:\LocalMachine). “Wait a minute!” you say. “I’ve only ever used the Get-ChildItem cmdlet with a file path to get a list of files and folders. Where do you get this cert:\localmachine business?” Simply put, this “path” is available due to the presence of a PowerShell Provider. For more info, check out Ed’s post: Find and use Windows PowerShell Providers. But the basics for today are that in providing CERT: as the path, I’m calling on the certificate provider in order to access specific information on my system.
  • We’re doing this recursively (-Recurse), to get every child object below this point.
  • We’re filtering out the containers (Where-Object {$_.PSIsContainer -eq $false}).
  • We’re ensuring that we’re grabbing all the attributes available (Format-List -Property *).

Running this command displays all the certificates installed on your local system, conveniently including a list of available attributes:

Screenshot of PowerShell

This example shows the GlobalSign Root CA in the root store of my machine. You should be able to find this cert on your system too. Alternatively, if you like doing things the hard way, you can bring up an MMC, load the certificates snap-in, and browse to the trusted root store. There you can find the GlobalSign Root CA – R1 certificate, and then copy each attribute value to Excel.
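
If you'd rather confirm that with PowerShell than with the MMC, a quick filter against the root store does the trick (a small sketch; the wildcard subject match is just an assumption about how the certificate is named):

Get-ChildItem -Path Cert:\LocalMachine\Root | Where-Object { $_.Subject -like '*GlobalSign*' }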

You would think that piping that command to a CSV would make for a happy day, wouldn’t you? Sadly, not so. Directly outputting this by using Export-CSV doesn’t give us the expected result.

Getting it all into a format we can manipulate is going to take a bit more effort. Enter the array and the PSObject.

So, my script now starts with defining an empty array, conveniently called $array.

Now, we see the familiar Get-ChildItem command. But instead of piping it directly out by using Export-Csv, we'll use a ForEach-Object loop and break down the output. Ultimately, what this does is:

  • Create a new PSObject for each certificate found by the get-childitem cmdlet. Think of the PSObject as a row inside your data table or, ultimately, your Excel sheet. (New-Object -TypeName PSObject)
  • Add the value of our selected attributes into “columns”. In this case, PSPath, FriendlyName, Issuer, NotAfter, NotBefore, SerialNumber, Thumbprint, DNSNameList, Subject, and Version are all populated. (Add-Member -MemberType NoteProperty -Name “%attrib%” -Value $_.%attrib%)
  • Add the object to your array as a new row. ($array += $obj)
  • Clear out the object, so that no data carries over on the next iteration of the loop. ($obj=$null)
  • Export your array to your CSV. (Export-Csv)

Screenshot of code
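
In case the screenshot is hard to read, here is a minimal sketch of the script it describes, assembled from the bullets above (the output file name is only an example):

$array = @()

Get-ChildItem -Path Cert:\LocalMachine -Recurse |
    Where-Object { $_.PSIsContainer -eq $false } |
    ForEach-Object {
        # One PSObject per certificate -- think of it as a row in the eventual CSV
        $obj = New-Object -TypeName PSObject
        $obj | Add-Member -MemberType NoteProperty -Name "PSPath"       -Value $_.PSPath
        $obj | Add-Member -MemberType NoteProperty -Name "FriendlyName" -Value $_.FriendlyName
        $obj | Add-Member -MemberType NoteProperty -Name "Issuer"       -Value $_.Issuer
        $obj | Add-Member -MemberType NoteProperty -Name "NotAfter"     -Value $_.NotAfter
        $obj | Add-Member -MemberType NoteProperty -Name "NotBefore"    -Value $_.NotBefore
        $obj | Add-Member -MemberType NoteProperty -Name "SerialNumber" -Value $_.SerialNumber
        $obj | Add-Member -MemberType NoteProperty -Name "Thumbprint"   -Value $_.Thumbprint
        $obj | Add-Member -MemberType NoteProperty -Name "DNSNameList"  -Value $_.DNSNameList
        $obj | Add-Member -MemberType NoteProperty -Name "Subject"      -Value $_.Subject
        $obj | Add-Member -MemberType NoteProperty -Name "Version"      -Value $_.Version
        $array += $obj   # add the "row" to the array
        $obj = $null     # clear the object so nothing carries over to the next certificate
    }

$array | Export-Csv -Path .\LocalCertificates.csv -NoTypeInformation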

As you see, we can then pipe our array out to the CSV file.

If all went well, you now have a CSV that contains the certificate information on your local machine! I would not be surprised if, after having done this, you discover expired certificates on your system. I’ll leave it to you to find the well-known one that I keep finding.
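
If you want to hunt for those expired certificates directly, a quick variation on the same command does the job (a small sketch; it simply compares each certificate's NotAfter value to today's date):

Get-ChildItem -Path Cert:\LocalMachine -Recurse |
    Where-Object { -not $_.PSIsContainer -and $_.NotAfter -lt (Get-Date) } |
    Select-Object Subject, Issuer, NotAfter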

If the attributes included above don’t meet your needs, you can easily add one to the loop (or remove one from it) simply by adding or deleting an Add-Member line. Say you decide you need to include the PSProvider attribute. Simply insert the following above the $array += $obj in the loop:

$obj | Add-Member -MemberType NoteProperty -Name "PSProvider" -value $_.PSProvider

To see what attributes are available, run the first command provided above, and read the output!

I suspect that many of you will want to see how to scale this to scanning remote systems, so watch for a future post that will do just that.

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum.

Patrick Mercier, Premier Field Engineer

Microsoft
