Channel: Scripting Blog

Hey, Scripting Guy! How Can I Remove All the Network Printers on a Computer?

Hey, Scripting Guy! Is it possible to delete or remove all network printers from a workstation with a VBScript script, similar to the way you removed all the network drives on a computer? -- WDC Hey, WDC. If you’ve been following our daily TechEd preview...(read more)

Hey, Scripting Guy! How Can I Document Printers?

Hey, Scripting Guy! I do not know about you, but does it seem that printers are a major pain in the dark side? I mean, they seem to multiply faster than bacteria. They are all just different enough to cause maintenance problems, and forget about trying...(read more)

Hey, Scripting Guy! How Can I Document Printer Usage on Client Workstations?

Hey, Scripting Guy! I am tasked with the problem of documenting printer use on our client workstations. I must be able to determine whether someone prints directly to a printer, or if they are printing through our print servers. This is important because...(read more)

Hey, Scripting Guy! How Can I Use WMI to Add a Printer Connection by Using Windows PowerShell?

Hey, Scripting Guy! I am trying to determine how to use WMI to add a printer connection by using Windows PowerShell. I have found this script , but it uses VBScript and not WMI. This means that it is not very close at all, I am afraid. Do you have a WMI...(read more)

Hey, Scripting Guy! How Can I Use Windows PowerShell to Remove Old Printer Connections, List Printers, and Set New Default Printers?

Hey, Scripting Guy! Well it finally happened. Our budget request for new printers was approved. We have gotten them unboxed, distributed, and hooked up to the LAN. We have even downloaded the latest drivers from the Internet, and updated the firmware...(read more)

Hey, Scripting Guy! How Can I Perform More Than One Action with the Win32_Printer WMI Class?

Hey, Scripting Guy! The Win32_Printer WMI class has several methods that are listed in it. But I am not sure how to best use these methods from inside a Windows PowerShell script. I do not want to create four different scripts to send a test page, clean...(read more)

Use PowerShell to Create New Printer Ports


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell 3.0 to create new printer ports in Windows 8.

Microsoft Scripting Guy, Ed Wilson, is here. One of the exciting things that is happening around the Scripting House is the appearance of new Windows PowerShell Saturday events. We have new events coming up in Atlanta, Singapore, and Charlotte. For information about these and other events, check out my site, Scripting Community. If you do not know what a Windows PowerShell Saturday is, check out my blog post, Community: All about PowerShell Saturday.

To programmatically create a working printer, there are at least three steps:

  1. Create the printer port.
  2. Install the printer driver.
  3. Install the printer (by using the printer port and the printer driver).

Today I am talking about creating the printer port.
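For orientation, here is a minimal end-to-end sketch of all three steps, assuming the driver is already available in the driver store; the port name, IP address, driver name, and printer name are placeholders:

# 1. Create a TCP printer port (name and address are illustrative)
Add-PrinterPort -Name 'HP_Direct:' -PrinterHostAddress '192.168.1.88'

# 2. Install a printer driver that is already staged in the driver store (name is illustrative)
Add-PrinterDriver -Name 'HP Universal Printing PCL 6'

# 3. Install the printer by tying the port and the driver together
Add-Printer -Name 'HP Direct' -PortName 'HP_Direct:' -DriverName 'HP Universal Printing PCL 6'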

Using PowerShell to work with printer ports

Before I create anything, I like to know what I have going on with my computer. I can use the Get-PrinterPort function to list existing printer ports on my local computer:

Get-PrinterPort

I can also use this function to retrieve printer port information from a remote server running Windows Server 2008 and Windows PowerShell 3.0 as shown here:

Get-PrinterPort -ComputerName dc1

The commands and the output from the commands are shown in the following image.

Image of command output

Adding a new printer port

To add a new printer port, I use the Add-PrinterPort function in Windows 8 or Windows Server 2012. By using the Add-PrinterPort function, I can add a local printer port, a TCP printer port, or an LPR printer port.

Most of the time, if I am creating a local printer port, I want to print directly to a printer on the network. Doing this bypasses a print server. Therefore, in the case of large print jobs, I lose flexibility because my laptop must remain on to manage the large print job. But for short documents, it is fast. Also by printing directly to the printer, I can configure things the way that I want.

By using Windows PowerShell, it is easy to create a TCP printer port. I use the Add-PrinterPort function and create a name for the port (the name does not matter, but it is best to use something that makes sense in the printing context). The IP address of the printer itself becomes the value for the PrinterHostAddress parameter. Here is the command I used:

Add-PrinterPort -Name 'HP_Direct:' -PrinterHostAddress '192.168.1.88'

I do not need to specify a value for the port number unless the printer is configured to use a different value than the default. The Add-PrinterPort function has four parameter sets, and I use the third one to create a TCP printer port. Here is the syntax for this parameter set:

Add-PrinterPort [-Name] <String> [-PrinterHostAddress] <String> [-AsJob [<SwitchParameter>]]
 [-CimSession <CimSession>] [-ComputerName <String>] [-PortNumber <UInt32>]
 [-SNMP <UInt32>] [-SNMPCommunity <String>] [-ThrottleLimit <Int32>]
 [-Confirm [<SwitchParameter>]] [-WhatIf [<SwitchParameter>]] [<CommonParameters>]

I have SNMP turned off on my network, so I do not need to specify a community string or any of that stuff.
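If a print device did require a nonstandard port or SNMP, those optional parameters could be supplied as in the following sketch; the port number, SNMP index, and community string shown are purely illustrative:

Add-PrinterPort -Name 'HP_Direct:' -PrinterHostAddress '192.168.1.88' -PortNumber 9101 -SNMP 1 -SNMPCommunity 'public'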

When I add a printer port, I do not need an elevated Windows PowerShell console. In Windows 8, I can do this without additional rights. In addition, Windows PowerShell does not perform any validation. Therefore, if the IP address is wrong or inaccessible, no warning is generated. This is shown here:

Add-PrinterPort -Name 'bogus:' -PrinterHostAddress '10.10.10.10'

The command and the output from Get-PrinterPort are shown in the following image.

Image of command output
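Because no validation is performed, a quick reachability check before creating the port can save cleanup later. Here is a minimal sketch that uses Test-Connection; the port name and IP address are illustrative:

$ip = '192.168.1.88'
if (Test-Connection -ComputerName $ip -Count 1 -Quiet) {
    Add-PrinterPort -Name 'HP_Direct:' -PrinterHostAddress $ip
}
else {
    Write-Warning "No response from $ip. The printer port was not created."
}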

Deleting the printer port

When I create something via Windows PowerShell, I also always like to know how to clean it up. To delete a printer port, I use the Remove-PrinterPort function. The use of the Remove-PrinterPort function is shown here:

Get-PrinterPort | Where name -eq 'bogus:' | Remove-PrinterPort

The following image shows the command and its associated output.

Image of command output

That is all there is to using Windows PowerShell to create new printer ports. Printer Week will continue tomorrow when I will talk about installing printer drivers.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Install Printer Drivers with PowerShell in Windows 8


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell in Windows 8 to install printer drivers.

Microsoft Scripting Guy, Ed Wilson, is here. This morning, it is rainy and overcast here in Charlotte, North Carolina, but it is pleasantly cool. The Scripting Wife migrated to the lanai and is sitting on her swing and checking Facebook on her Windows RT Surface. I am in my office checking email sent to scripter@microsoft.com. I am sipping a cup of English Breakfast tea with a cinnamon stick, lemon grass, hibiscus blossom, orange peel, and a bit of spearmint. It is a very refreshing cup of tea.

When it comes to using Windows PowerShell to install print drivers, there is the long way and the short way. The long way is…well…long and rather complicated. The short way is easy. The difference in the two methods is not necessarily a conscious choice, but rather a function of the drivers already installed in Windows and the print device you intend to hook up.

For example, we all know that Windows ships with a whole bunch of printer drivers already on the disk. They reside in the Windows\inf folder, and they all begin with the letters prn. The following script lists the printer drivers that ship with Windows.

Get-ChildItem ((Get-Item Env:\systemroot).value+"\inf") -Exclude *.pnf -Recurse |
  Where-Object { $_.name -match "prn" } |
  Sort-Object -Property name |
  Format-Table -Property name, length, creationTime, lastWriteTime -AutoSize

Of course, one issue is that the output is a bit convoluted. The following image illustrates the output.

Image of command output

The issue is that the names, such as prnbrcl1.inf, do not make too much sense. I can go to the Windows\inf directory, open the .inf file in Notepad, and I am greeted with something that looks like the following.

Image of file content

If I compare this output with the output from the advanced printer installation dialog box, I can see similarities. This is shown here.

Image of menu

If I select a printer driver from the previous list, and click Next, the driver installs. I can verify this via the Get-PrinterDriver function, as shown here:

Get-PrinterDriver

The following image shows the command and its output.

Image of command output

I can then use the Get-PrinterDriver function to retrieve the newly installed printer driver:

Get-PrinterDriver -Name "Brother *"

If I attempt to remove it, however, an error message appears, which states that it is being used by a printer. This command and the error message are shown here.

Image of error message

After I remove the printer that uses the driver, I can then remove the printer driver—but that is tomorrow’s blog post.

To add a printer driver that exists in the driver store, I need to use the actual driver name; I cannot use wildcard characters, or an error message appears. For example, to install the "Brother Laser Leg Type1 Class Driver" that I found in the .inf files, I must use the complete name. This command is shown here:

Add-PrinterDriver -Name "Brother Laser Leg Type1 Class Driver"

If I attempt to use Brother *, an error occurs. This message is shown in the following image.

Image of error message

I can also use the Add-PrinterDriver function to install a print driver by specifying the name of the .inf file for that printer driver. One issue is that often printer drivers are “universal drivers,” and the .inf file contains information for dozens of printers. Therefore, Windows PowerShell will not know which driver to install.
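In practice, that means supplying the exact driver name even when pointing at an .inf file. Here is a minimal sketch, assuming the driver .inf has been staged locally and that your version of the function supports the InfPath parameter; the path is illustrative:

Add-PrinterDriver -Name "Brother Laser Leg Type1 Class Driver" -InfPath "C:\Drivers\Brother\prnbrcl1.inf"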

That is all there is to using Windows PowerShell to install printer drivers. Printer week will continue tomorrow when I will talk about removing printers.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Use PowerShell in Windows 8 to Remove Printers


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell 3.0 in Windows 8 to remove printers.

Microsoft Scripting Guy, Ed Wilson, is here. The Scripting Wife and I have been talking to various people from the Charlotte Windows PowerShell User Group all week about doing another Windows PowerShell Saturday. It is an awful lot of work, but I think we are going to do this again. The Windows PowerShell Saturday in Charlotte sold out within a few days, and there have been many positive comments about the event. That means that people found it to be a valuable experience. So we will have another Windows PowerShell Saturday. (By the way, if you want to have one where you live, let us know via scripter@microsoft.com.)

To remove a printer with Windows PowerShell, I use the Remove-Printer function from the PrintManagement module. There are two ways to use the Remove-Printer function:

Remove-Printer [-Name] <String> [-AsJob [<SwitchParameter>]] [-CimSession <CimSession>]
 [-ComputerName <String>] [-PassThru [<SwitchParameter>]] [-ThrottleLimit <Int32>]
 [-Confirm [<SwitchParameter>]] [-WhatIf [<SwitchParameter>]] [<CommonParameters>]

Remove-Printer [-AsJob [<SwitchParameter>]] [-CimSession <CimSession>]
 [-PassThru [<SwitchParameter>]] [-ThrottleLimit <Int32>] -InputObject <CimInstance>
 [-Confirm [<SwitchParameter>]] [-WhatIf [<SwitchParameter>]] [<CommonParameters>]

What this means is that if I type the exact printer name, I can use the Remove-Printer function directly. It also tells me that I can pipe a printer object to the function. By piping a printer object, I can use wildcard characters.

Begin with Get-Printer

I usually begin things by using a Get type of command. So the first thing I do is use the Get-Printer function to see what printers are defined. The command is shown here:

Get-Printer

The command and its associated output are shown here:

Image of command output

I can use a wildcard character to avoid typing a complete printer name as shown here:

PS C:\> Get-Printer | where name -Like "my*"

Name                           ComputerName    Type         DriverName
----                           ------------    ----         ----------
myotherlaser                                   Local        Brother Laser Leg Typ...

Or, I can type the exact printer name and supply it directly to the –Name parameter as shown here:

PS C:\> Get-Printer -Name myotherlaser

Name                           ComputerName    Type         DriverName
----                           ------------    ----         ----------
myotherlaser                                   Local        Brother Laser Leg Typ...

Both of these commands return printer objects, and therefore, they can be piped to the Remove-Printer function. This is shown here:

Get-Printer -Name myotherlaser | Remove-Printer

Get-Printer | where name -like "my*" | Remove-Printer

Remember Whatif

Of course, before I run a Remove-Printer command, I want to use the –WhatIf switch to ensure that I am doing exactly what I want to do. Here is an example of a near disaster:

PS C:\> Get-Printer | where name -match "my*" | Remove-Printer -WhatIf
What if: Deleting printer Microsoft XPS Document Writer
What if: Deleting printer \\dc1.iammred.net\HP LaserJet 2100 PCL6
What if: Deleting printer myotherlaser
PS C:\>

Luckily, I used –WhatIf, so I did not delete a bunch of my printers. The near disaster happened because the –match operator treats "my*" as a regular expression ("m" followed by zero or more "y" characters), so it matches any printer name that contains an "m", unlike the –like wildcard, which matches only names that begin with "my".

Directly remove a printer

I can use the Remove-Printer function directly to remove a printer if I know the exact name. If I am unsure of the printer name, I use the Get-Printer function to list my printers, and I copy and paste the name. With quick edit mode turned on, I can highlight the printer name with my mouse, press ENTER to copy it to the clipboard, and then right-click to paste it. This is shown in the image that follows.

Image of command output

Here is the command:

Remove-Printer -Name myotherlaser

After I have deleted the printer, I may decide to delete the printer driver and the printer port (if necessary). To do that, I use the following functions:

PS C:\> Get-PrinterDriver -Name "Brother*"

Name                                PrinterEnvironment MajorVersion    Manufacturer
----                                ------------------ ------------    ------------
Brother Laser Leg Type1 Class Dr... Windows x64        4               Brother

PS C:\> Get-PrinterDriver -Name "Brother*" | Remove-PrinterDriver

I use the Get-PrinterPort function, and I decide that I do not need to remove any printer ports.

That is all there is to using Windows PowerShell to remove printers. This also concludes Printer Week. Join me tomorrow when I will talk about running scripts on remote file shares.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Use PowerShell to Fix Broken Printer


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to fix a printer that keeps losing its duplexer.

Microsoft Scripting Guy, Ed Wilson, is here. One of the cool things about Windows PowerShell is that if I want to, I can generally use it to solve all of my issues. Sometimes an issue is not significant enough for me to want to invest the time. However, if it is a persistent issue that occurs often enough, at some point it might very well tip the scale. The issue I have today is just such a case.

Why can't I use my duplexer?

I believe in printing on both sides of the paper when I have stuff to print out. This, I believe, is a judicious use of resources. Therefore, whenever I buy a printer, I must get one with a duplexer. The printer I have worked fine until I upgraded to Windows 8. Now, for some reason, every time I reboot my computer, the printer forgets that it has a duplexer. I suspect this is because the maker of the printer did not update their printer driver.

Three times a month, I go to a writers group, and I have to print a portion of my new book project for peer review. This means that, at a minimum, this issue comes back to haunt me on a recurring basis. If I am working in Word, I must save my work, close Word, go find my printer, tell the printer that it has a duplexer, reopen Word, find my document and open it, find my spot, and then go into the Word dialog box and tell it to print.

All of this can be enormously monotonous. I have done it often enough that I even know the difference between a printer property and a printer preference. Here is the dialog box:

Image of menu

As I can see from this screenshot, for some reason the printer believes that the duplexer is not installed. I can easily change that, but every time I reboot my computer, it resets to Not installed. I have tried everything I could think of. I have checked for newer printer drivers and I have gone to the print device itself to see if something changed...all to no avail.

This is an issue I have lived with since I upgraded from Windows 7 to Windows 8 to Windows 8.1. I really hoped that upgrading to Windows 8.1 would fix it, but alas, it did not.

Windows PowerShell to the rescue

This little issue is an annoyance, not a real problem. But it is what prompted me to see how to add back in printer options via Windows PowerShell. In fact, I wrote a script to do this very thing:

PS C:\> $hp = Get-Printer -Name *hp*
PS C:\> Set-PrinterProperty -PrinterName $hp.name -PropertyName Config:Duplexer -Value Installed

The previous script worked great for a while, then it quit working.

I decided I needed to do a bit of research as to how all this works. Unfortunately, there is no information about what a permissible PropertyName looks like in the Help. To be honest, however, after doing my research, I am not sure how it could be documented because the property names are set by the people who write the printer driver. I did find a very interesting document. It is the Printer Driver Developers Guide, and it makes for fascinating weekend reading.

The property names themselves are the ones that are in the printer driver data file. So I decided to use Windows PowerShell to retrieve the file and open it in Notepad. The command I used is:

notepad (Get-PrinterDriver -Name *hp*).datafile

The file looks fascinating, and so I decided to take a picture of it. In the following image, I am on line 1579. The good thing is that I can use the search feature in Notepad to find the information I need. It is shown here:

Image of command output

It looks to me like they changed the name of the feature from Duplexer to DuplexUnit. So, I make a little change. Perfect. It works! Here is the new script:

PS C:\> Set-PrinterProperty -PrinterName $hp.name -PropertyName "Config:DuplexUnit" -Value Installed
PS C:\> Get-PrintConfiguration -PrinterName $hp.name

PrinterName     ComputerName    Collate    Color      DuplexingMode
-----------     ------------    -------    -----      -------------
HP2005DN                        True       False      TwoSidedLongEdge

Another way to find out about available printer properties is to use the Get-PrinterProperty cmdlet, as I use here:

PS C:\> Get-PrinterProperty -PropertyName $hp.name

cmdlet Get-PrinterProperty at command pipeline position 1
Supply values for the following parameters:
PrinterName:

PS C:\> Get-PrinterProperty -PrinterName $hp.name

ComputerName         PrinterName          PropertyName         Type       Value
------------         -----------          ------------         ----       -----
                     HP2005DN             FormTrayTable        String     Config:AutoS...
                     HP2005DN             Config:DuplexUnit    String     Installed
                     HP2005DN             Config:Memory        String     384MB

I can see from the previous output that the property is Config:DuplexUnit. In my testing (at least on my printer), the Installed value is case sensitive, so you may want to keep that in mind.

Note  Of course this cmdlet requires Admin rights, so start the Windows PowerShell console with an elevated account.

Make the change automatic

Now I have found the Windows PowerShell script I need to be able to make the change, and I want to make the change automatically each time my laptop reboots. Therefore, I want a startup job.

Creating a job to run at startup is pretty simple. For more information, read Use PowerShell to Create Job that Runs at Startup.

I decided to use a script block instead of a script because in reality I had a one-liner, and it also makes the job more portable. First I create my startup trigger, then I register the scheduled job. It is two lines of script:

PS C:\> $trigger = New-JobTrigger -AtStartup -RandomDelay 00:00:45
PS C:\> Register-ScheduledJob -Trigger $trigger -ScriptBlock {Set-PrinterProperty -PrinterName (Get-Printer -Name *hp*).name -PropertyName "Config:DuplexUnit" -Value Installed} -Name SetPrinterDuplexer

Id         Name            JobTriggers     Command                                  Enabled
--         ----            -----------     -------                                  -------
2          SetPrinterDu... 1               Set-PrinterProperty -PrinterName (Get... True

Now for the test

I reboot my computer, and the first thing I do is open the Windows PowerShell console with Admin rights to see if my scheduled job ran. As shown here, it did:

Image of command output

Now for the big test. Is my printer set to duplex? I open Control Panel, navigate to my printer, click Device Settings, and sure enough, my duplexer is now installed. Sweet.

Image of menu

Two lines of script and I fixed an issue that has vexed me for two years. Not a bad ROI…not bad at all. Windows PowerShell for the win!

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PSScriptAnalyzer deep dive – Part 2 of 4


Summary: Thomas Rayner, Microsoft Cloud and Datacenter Management MVP, shows how to suppress, include, and exclude PSScriptAnalyzer rules.

Hello! I’m Thomas Rayner, a Cloud and Datacenter Management Microsoft MVP, filling in for The Scripting Guy this week. You can find me on Twitter (@MrThomasRayner), or posting on my blog, workingsysadmin.com. This week, I’m presenting a four-part series about how to use PSScriptAnalyzer.

Part 1 – Getting started with PSScriptAnalyzer

Part 2 – Suppressing, including, excluding rules

Part 3 – Wrapping PSScriptAnalyzer with Pester to get formatted results

Part 4 – Writing custom rules

This is Part 2, so I’m going to show you how to specifically suppress, include, or exclude certain rules.

Suppressing a rule refers to using special code that’s intermingled with your PowerShell code to tell PSScriptAnalyzer (PSSA) that you don’t want it to check a portion of your code against a specific rule. Including or excluding rules refers to the set of rules that you want to check your entire module or script against.

The ReadMe.md file on the PSScriptAnalyzer GitHub page has a lot of great information about suppressing rules, so I’ll only explain it briefly before I show you how to include or exclude only certain rules.

Check out this code example:

param (
    [Parameter()]
    [ValidateSet('Yes', 'No', 'Maybe')]
    [string]$UsePassword
)

Write-Output "Using a password: $UsePassword"

This script is clearly useless, but it’s an example where I have a parameter name that contains “Password”. Let’s see what PSSA says about that.

Result of example where a parameter name contains “Password”

PSSA says that I shouldn’t have parameters that take passwords in plain text. The recommended practice for sending passwords to parameters is to either send them as part of a PSCredential object or, as recommended by PSSA, as a SecureString object. But, this script isn’t passing a password. Presumably, it’s telling another part of the script whether it’s using a password or not and, as far as we can tell from this context, it’s totally safe.
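For comparison, here is a minimal sketch of what PSSA would rather see when a parameter genuinely carries a password or credentials:

param (
    # A password passed as a SecureString rather than plain text
    [Parameter()]
    [System.Security.SecureString]$Password,

    # Or, better still, a full credential object
    [Parameter()]
    [System.Management.Automation.PSCredential]$Credential
)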

In this example, I know that I can safely ignore this warning. I can use the following code to tell PSSA to ignore that rule for that parameter.

[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingPlainTextForPassword", "")]
param (
    [Parameter()]
    [ValidateSet('Yes', 'No', 'Maybe')]
    [string]$UsePassword
)

Write-Output "Using a password: $UsePassword"

Now, when I run Invoke-ScriptAnalyzer, I don’t get any output, which means that PSSA doesn’t have any rule violations to notify me about.

There is a ton more to know about suppressing rules, but it’s explained well on the PSSA GitHub page, and you should read about it there.

Excluding and including rules for an entire script is a different matter, though. Let’s change up our example code. Now we’re going to look at something that might be a controller script that you run to launch other tools and work with them.

#requires -Module ActiveDirectory,AzureRm,MyCustomModule
$options = @"
1. New User
2. Delete User
3. New Azure VM
4. Some Line Of Business Thing
"@

Write-Host $options

$choice = Read-Host "What do you want to do?"
Write-Host "You picked $choice"

if ($choice -eq "1") {
    New-CustomUser
}

This script uses Write-Host, which, if you read yesterday’s post, you know is a rule violation. This is a controller script, though. This thing is only going to be run interactively as a user sits at a console and interacts with the script. In this limited scenario, it totally makes sense to use Write-Host. Because I know that, I can tell PSSA to ignore that rule for my entire script instead of suppressing it for specific lines and scenarios.

Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -ExcludeRule PSAvoidUsingWriteHost

Running this against my example code will return nothing because PSSA has no violations to report since I told it to exclude the PSAvoidUsingWriteHost rule for my whole script.

The -IncludeRule parameter works a little bit differently. When you use it, you effectively tell PSSA to exclude all rules except the ones that you explicitly include. So, if I ran this…

Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -IncludeRule PSAvoidUsingWriteHost

… I would have PSSA reporting violations only about the PSAvoidUsingWriteHost rule. You can pass an array of rules and have it check only a few.

Why would you ever want that? Well, maybe your script is longer than the 16-line example that I shared earlier. It can take some time for PSSA to evaluate large scripts and modules. Perhaps you want to quickly check if your teammate removed all the aliases from a 100,000-line script like he said he did. Using -IncludeRule, you don’t have to wait for PSSA to evaluate all the other rules, too. You can quickly find out if there are any aliases in the script by checking only that rule.
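That quick check might look something like the following; the script path is illustrative, and PSAvoidUsingCmdletAliases is the built-in rule that flags aliases:

# Check a single rule only
Invoke-ScriptAnalyzer -Path .\BigScript.ps1 -IncludeRule PSAvoidUsingCmdletAliases

# Several rules can be passed as an array
Invoke-ScriptAnalyzer -Path .\BigScript.ps1 -IncludeRule PSAvoidUsingCmdletAliases, PSAvoidUsingWriteHost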

Join me tomorrow, and I’ll show you how to use Pester to get differently formatted results from PSScriptAnalyzer.

Thomas, that was some excellent reading! It’s amazing how much you can use this tool to verify your own scripts!

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow.

Until then, always remember that with Great PowerShell comes Great Responsibility.

Sean Kearney
Honorary Scripting Guy
Cloud and Datacenter Management MVP

PowerTip: Convert from UTC to my local time zone


Summary: Cloud and Datacenter Management MVP, Thomas Rayner, shows how to write a function to convert from UTC to your local time zone.

Hey, Scripting Guy! Question I have a time that I’d like to convert from UTC to my local time zone. How can I do this?

Hey, Scripting Guy! Answer You can write your own function to do this, and use the [System.TimeZoneInfo] .NET class and associated methods to make this conversion easily. Here is an example:

function Convert-UTCtoLocal
{
    param(
        [parameter(Mandatory=$true)]
        [String] $UTCTime
    )

    # Look up the local time zone and convert the supplied UTC time to it
    $strCurrentTimeZone = (Get-WmiObject win32_timezone).StandardName
    $TZ = [System.TimeZoneInfo]::FindSystemTimeZoneById($strCurrentTimeZone)
    $LocalTime = [System.TimeZoneInfo]::ConvertTimeFromUtc($UTCTime, $TZ)

    # Return the converted time to the caller
    $LocalTime
}
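With the converted value returned at the end of the function, a call like the following (the time value is illustrative) writes the corresponding local time to the pipeline:

Convert-UTCtoLocal -UTCTime '2017-02-03 15:00'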

The Doctor

PSScriptAnalyzer deep dive – Part 3 of 4


Summary: Thomas Rayner, Microsoft Cloud and Datacenter Management MVP, shows how to use Pester to get nUnit formatted results out of PSScriptAnalyzer.

Hello! I’m Thomas Rayner, a Cloud and Datacenter Management Microsoft MVP, filling in for The Scripting Guy this week. You can find me on Twitter (@MrThomasRayner), or posting on my blog, workingsysadmin.com. This week, I’m presenting a four-part series about how to use PSScriptAnalyzer.

Part 1 – Getting started with PSScriptAnalyzer

Part 2 – Suppressing, including, excluding rules

Part 3 – Wrapping PSScriptAnalyzer with Pester to get formatted results

Part 4 – Writing custom rules

This is Part 3, and I’m going to show you how to get nUnit formatted results from PSScriptAnalyzer.

nUnit is an XML-based, open source test result format that’s used in popular Continuous Integration and Continuous Deployment services like AppVeyor and Visual Studio Team Services. There are ways to get differently formatted and recorded output from PSScriptAnalyzer, but most people I’ve worked with have found it easiest to use Pester to format their test results.

Pester is a domain-specific language that was developed to test PowerShell code. There are a ton of great resources to learn Pester. If what you see in this post is very unfamiliar, I’d recommend finding a book, online training course, blog, or other resource to get started. The Pester GitHub page is a great place to start.

Note: Procedures in this post are for Pester version 3.4.6 and haven’t been tested on 4.x yet.

If you don’t already have Pester installed, you can get it from the PowerShell Gallery and import the module.

Install-Module -Name Pester -Scope CurrentUser

Import-Module -Name Pester

Now you’re ready to do some Pester testing.

First, let’s define the script, named MyScript.ps1, that we want to evaluate.

param (
    $Path,
    $DaysOld
)
$someVar = $null
Write-Host "Counting items..."
$itemCount = (gci $Path | ? { $_.LastWriteTime -gt (Get-Date).AddDays(-$DaysOld)}).Count
Write-Host "There are $itemCount items"

There are some clear problems with this already, but, if it were perfect, it wouldn’t generate interesting PSSA output.

Now, I’m going to write a Pester test. I’ll start by declaring a Describe and Context block.

Describe 'Testing against PSSA rules' {
    Context 'PSSA Standard Rules' {
    }
}

Now, something should probably go inside, right? I’m going to run PSSA against my script, store the results in $analysis, and get a list of rules stored in $scriptAnalyzerRules. (I could also specify a location to custom rules here if I had any.)

$analysis = Invoke-ScriptAnalyzer -Path '.\MyScript.ps1'
$scriptAnalyzerRules = Get-ScriptAnalyzerRule

Why did I do this? What I’m going to do next is use a foreach loop to build all my It statements, and test for all the rules we’re evaluating.

forEach ($rule in $scriptAnalyzerRules) {
    It "Should pass $rule" {
        If ($analysis.RuleName -contains $rule) {
            $analysis |
                Where RuleName -EQ $rule -OutVariable failures |
                Out-Default
            $failures.Count | Should Be 0
        }
    }
}

Here, I’m looping through all the rules that PSSA checked and declaring that my script should pass that rule. Any PSSA rule violations are stored in $analysis. So, if $analysis.RuleName has an entry in it that matches a rule, I know that this assertion failed.

Here’s what my full test file, MyTest.tests.ps1, looks like.

Describe 'Testing against PSSA rules' {
    Context 'PSSA Standard Rules' {
        $analysis = Invoke-ScriptAnalyzer -Path '.\MyScript.ps1'
        $scriptAnalyzerRules = Get-ScriptAnalyzerRule

        forEach ($rule in $scriptAnalyzerRules) {
            It "Should pass $rule" {
                If ($analysis.RuleName -contains $rule) {
                    $analysis |
                        Where RuleName -EQ $rule -OutVariable failures |
                        Out-Default
                    $failures.Count | Should Be 0
                }
            }
        }
    }
}

Now, I just need to run one line to get my nUnit formatted test results.

Invoke-Pester -OutputFile 'PSSAResults.xml' -OutputFormat 'LegacyNUnitXml' -Script '.\MyTest.tests.ps1'

Done! Presumably, if you’re doing this, it’s to interface with a Continuous Integration/Deployment service like AppVeyor or Visual Studio Team Services. You will have a couple of steps left to integrate this solution seamlessly into your Continuous Integration/Deployment suite of choice and to properly collect and act on the test results, but this was the funky part: getting nUnit formatted results from PSSA.

There are other ways to get the same result, but this is a pretty easy, scalable solution that I happen to like. If it doesn’t work for you for some reason, I’d love to hear about your alternative!

Now, if you’ve integrated this into Visual Studio Team Services, for example, you’ll get some cool screens like these:

Looking at the test results in Visual Studio Team Services.

Build results that include Pester results

Reviewing detailed results for the PSSA results to see which rules were violated.

Pass / fail results

I have a much longer post on my blog that explains how to set this up in Visual Studio Team Services. If that interests you, feel free to check out Invoking Pester and PSScriptAnalyzer Tests in Hosted VSTS.

Come back tomorrow, and I’m going to show you how to write your own customized PSScriptAnalyzer rules.

Thomas, thanks for that update! Using Pester as an additional tool is certainly an excellent way to QA my tools before they go into production!

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow.

Until then, always remember that with Great PowerShell comes Great Responsibility.

Sean Kearney
Honorary Scripting Guy
Cloud and Datacenter Management MVP

PowerTip: Get a list of suspended Azure Automation jobs


Summary: Cloud and Datacenter Management MVP, Thomas Rayner, shows how to get a list of all your suspended Azure Automation jobs.

Hey, Scripting Guy! Question I think I have an Azure Automation job that is getting suspended for running too long. How can I verify this?

Hey, Scripting Guy! Answer You can use the Get-AzureRmAutomationJob cmdlet and its -Status parameter to get this information. Here is an example:

Get-AzureRmAutomationJob -ResourceGroupName $ResourceGroupName -AutomationAccountName $AutomationAccountName -Status Suspended

You can also use other status values, such as Finished, Running, and Starting, in place of Suspended.
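If there are many suspended jobs, a sketch like the following lists the most recent ones first; it assumes the returned job objects expose a StartTime property:

Get-AzureRmAutomationJob -ResourceGroupName $ResourceGroupName -AutomationAccountName $AutomationAccountName -Status Suspended |
    Sort-Object -Property StartTime -Descending |
    Select-Object -First 5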

The Doctor

PSScriptAnalyzer deep dive – Part 4 of 4


Summary: Thomas Rayner, Microsoft Cloud and Datacenter Management MVP, shows how to write a custom PSScriptAnalyzer rule.

Hello! I’m Thomas Rayner, a Cloud and Datacenter Management Microsoft MVP, filling in for The Scripting Guy this week. You can find me on Twitter (@MrThomasRayner), or posting on my blog, workingsysadmin.com. This week, I’m presenting a four-part series about how to use PSScriptAnalyzer.

Part 1 – Getting started with PSScriptAnalyzer

Part 2 – Suppressing, including, excluding rules

Part 3 – Wrapping PSScriptAnalyzer with Pester to get formatted results

Part 4 – Writing custom rules

This is Part 4, so let’s look at how to write your own custom PSScriptAnalyzer rules.

At this time, PSScriptAnalyzer comes with a total of 45 rules that are based on community best practices. PowerShell team members at Microsoft and the community developed these rules. The built-in rules are a great baseline, and a good starting point that will quickly tell you if a script or module has any glaring flaws before you get too deep into it. That’s great, but what if you or your team has some more stringent standards, or you want to borrow the PSSA engine to check scripts for some other reason? You’ll need a custom rule.

Before we go further, here’s the script that I’m going to be testing today, saved as MyScript.ps1. It’s pretty useless because I’m just trying to highlight some PSSA functionality.

function Get-Something {
    param (
        [string]$Words
    )

    Write-Host "You said $Words"
}

function Get-MYVar {
    param (
        [string]$VariableName
    )

    $results = $null
    Get-Variable -Name $VariableName
}

As with most of the other example code in this series, if you’ve been following along, you should already see some things that are going to trigger some PSSA rule violations.

Result of running MyScript.ps1 to trigger PSSA rule violations

I’m declaring a variable that I never actually use, and I’m using Write-Host. Both actions are violations of standard PSSA rules.

Maybe there are more issues with my script, though. Perhaps in my organization, it is against my style and standards guidelines to have a function that has adjacent capital letters. Instead of having Get-MYVar, I should have Get-MyVar. Plenty of people support this rule because it increases readability. Instead of something like Get-AzureRMVM, you can have Get-AzureRmVm, which is more readable.

PSSA didn’t tell me about my function whose name has adjacent capital letters, though. I know that I can use the regex pattern ‘[A-Z]{2,}’ to detect two capital letters in a row, but how do I write a PSSA rule?

To write custom PSScriptAnalyzer rules, you’ll need at least basic knowledge of the PowerShell abstract syntax tree. The PowerShell abstract syntax tree is a tree-style representation of the code that makes up whatever you’ve written. Tools that are built into .NET and PowerShell parse files for examination by tools like, but not limited to, PSSA. Doing a deep dive on the abstract syntax tree could be its own five-part series. If that’s something you’d like to see, contact me by using my information at the beginning of this post. If the demand is there, I will put one together. For now, I’m just going to recommend that you do a little independent learning if what you see in this post is too far over your head. The abstract syntax tree is a bit of an abstract concept (pun intended) to get into, but a few really good blog posts and info pages are out there already.

So, let’s get into it. PSSA custom rules are just PowerShell functions. I’m going to make a new file named MyRule.psm1 and start to build my function. Note that the file needs to be a .psm1. Otherwise, it won’t work properly.

function Test-FunctionCasing {
    [CmdletBinding()]
    [OutputType([PSCustomObject[]])]
    param (
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [System.Management.Automation.Language.ScriptBlockAst]$ScriptBlockAst
    )
}

My custom rule is going to be named Test-FunctionCasing. Custom PSSA rules output PSCustomObject objects and take some form of abstract syntax tree object as input. For this scenario, I want to use the ScriptBlockAst objects in my script because that’s the part of the tree that will give me what I need to check function names.

Note: I’ve started a module, called AstHelper, that’s geared towards helping people discover and use the abstract syntax tree. It’s available on the PowerShell Gallery (Find-Module AstHelper), and if you’d like to contribute, the source is on GitHub (https://github.com/ThmsRynr/AstHelper). At this time, it’s very much in its infancy, but I still use it to discover what types of abstract syntax tree objects are in PowerShell scripts and modules, and which objects in them are of a given AST type. Jason Shirk also built a cool module for exploring the abstract syntax tree (https://github.com/lzybkr/ShowPSAst).

Back to our custom rule. I’m going to add a process block next.

process {
    try {
        $functions = $ScriptBlockAst.FindAll( { $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst] -and
            $args[0].Name -cmatch '[A-Z]{2,}' }, $true )
    }
    catch {
        $PSCmdlet.ThrowTerminatingError( $_ )
    }
}

Here, I’m getting all the functions that have a name that matches my regex pattern of “two adjacent capital letters”. I’ve wrapped this in a try / catch, just in case. The .FindAll() syntax is somewhat robust, and it can be daunting if you are not familiar with it. Microsoft has it well documented at Ast.FindAll Method (Func<Ast, Boolean>, Boolean).

Now, I just need a foreach loop to go through all the functions that matched the pattern and report them to PSSA.

foreach ( $function in $functions ) {
    [PSCustomObject]@{
        Message  = "Avoid function names with adjacent caps in their name"
        Extent   = $function.Extent
        RuleName = $PSCmdlet.MyInvocation.InvocationName
        Severity = "Warning"
    }
}

All I need to do is specify a message, an extent (built in to the result stored in $function), the rule name that it violated (which is the name of the PowerShell function I’m building), and severity.

My entire, assembled rule looks like this.

function Test-FunctionCasing {
    [CmdletBinding()]
    [OutputType([PSCustomObject[]])]
    param (
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [System.Management.Automation.Language.ScriptBlockAst]$ScriptBlockAst
    )

    process {
        try {
            $functions = $ScriptBlockAst.FindAll( { $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst] -and
                $args[0].Name -cmatch '[A-Z]{2,}' }, $true )
            foreach ( $function in $functions ) {
                [PSCustomObject]@{
                    Message  = "Avoid function names with adjacent caps in their name"
                    Extent   = $function.Extent
                    RuleName = $PSCmdlet.MyInvocation.InvocationName
                    Severity = "Warning"
                }
            }
        }
        catch {
            $PSCmdlet.ThrowTerminatingError( $_ )
        }
    }
}

Now, I save MyRule.psm1, and I can include it when I run Invoke-ScriptAnalyzer.

Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -CustomRulePath .\MyRule.psm1

And I get back a violation that looks just like you’d think it should.

Example of a violation

But wait. Doesn’t MyScript.ps1 violate some of the standard rules too? Where are those violations?

Well, if we want to include the standard rules when we’re using custom rules, we just need to add one parameter to our Invoke-ScriptAnalyzer command.

Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -CustomRulePath .\MyRule.psm1 -IncludeDefaultRules

That looks better!

Image of command output

That’s it! This concludes my four-part deep dive on PSScriptAnalyzer. Hopefully, you’ve learned something about PSSA and have seen that even though this was a deep dive, the rabbit hole goes much deeper.

Happy scripting!

Thomas! That was a great set, and now you’ve got my brain churning! I’ll be sure to have my PC wrapped this weekend playing with all these cool new ideas! Thanks!

I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow.

Until then, always remember that with Great PowerShell comes Great Responsibility.

Sean Kearney
Honorary Scripting Guy
Cloud and Datacenter Management MVP


PowerTip: Get a list of security patches installed in the last 90 days


Summary: Cloud and Datacenter Management MVP, Thomas Rayner, shows how to get a list of all the security patches installed in the last three months.

Hey, Scripting Guy! Question How can I list all the security patches that I’ve installed in the last 90 days?

Hey, Scripting Guy! Answer There’s a class for that! Just use Get-CimInstance, and you can retrieve this information. Here is an example:

Get-CimInstance -Class win32_quickfixengineering | Where-Object { $_.InstalledOn -gt (Get-Date).AddMonths(-3) }
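To make the output easier to scan, the results can be sorted and trimmed to the most useful columns; the Description column helps distinguish security updates from other hotfixes:

Get-CimInstance -ClassName Win32_QuickFixEngineering |
    Where-Object { $_.InstalledOn -gt (Get-Date).AddMonths(-3) } |
    Sort-Object -Property InstalledOn -Descending |
    Select-Object -Property HotFixID, Description, InstalledOn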

The Doctor

Debugging PowerShell script in Visual Studio Code – Part 1


Summary: Here’s a look at the many features of the PowerShell debugger for Visual Studio Code.

In previous blog posts, we covered how to get started with PowerShell development in Visual Studio Code and the editing features of Visual Studio Code and the PowerShell extension.  If you don’t already have Visual Studio Code configured with the PowerShell extension, read those blog posts to get caught up.

In the first of this two-part series, we will cover the many features of the PowerShell debugger for Visual Studio Code.  These features are provided by the PowerShell extension, or, more accurately, by the PowerShell Editor Services module which comes with the PowerShell extension.

PowerShell Editor Services runs in a separate process and supplies both language and debugging services to Visual Studio Code via a JSON remote procedure call (RPC) protocol that’s defined by Visual Studio Code. One advantage of this approach is that a crash of the PowerShell Editor Services process doesn’t cause Visual Studio Code to crash. And, with the latest version of the PowerShell extension, you can simply restart the current PowerShell session without restarting Visual Studio Code to get going again.

First look at the PowerShell Debugger in Visual Studio Code

Press Ctrl+Shift+P (Cmd+Shift+P on Mac) to open the command palette, type PowerShell open examples, and then press Enter to open the PowerShell extension’s Examples folder. After the Examples folder has loaded, open the DebugTest.ps1 file, and set a breakpoint on the line that has the Start-Sleep command.  To set the breakpoint, either click in the left editor margin or press F9 to toggle the breakpoint on and off for the current line.

To open the Debug view, select Debug in the View Bar or from the View menu, or press Ctrl + Shift + D. In the Launch Configuration dropdown (shown in the following screenshot), select the PowerShell Launch (current file) configuration. Like the Windows PowerShell Integrated Scripting Environment (ISE), this configuration will execute the file that’s in the active editor window under the debugger when debugging is started.

Selecting the PowerShell Launch (current file) configuration

Let’s start a debug session. First, make sure the DebugTest.ps1 file’s editor window is still the active window, and then press F5 or click the green Start Debugging button to the left of the Launch Configuration dropdown (shown in the previous screenshot).

After the debugger starts, you will see the Debug actions pane (shown in the following screenshot), and the debugger should pause at the breakpoint that you set.

Debug actions pane

The Debug actions pane provides buttons for:

  • Continue / Pause – F5
  • Step Over – F10
  • Step Into – F11
  • Step Out – Shift + F11
  • Restart – Ctrl + Shift + F5
  • Stop – Shift + F5

Now, let’s look at the Debug view features that are available during a debug session.

Screenshot of debug session

The VARIABLES section of the Debug view allows easy inspection of variable values. The Auto group weeds out the PowerShell automatic variables and leaves just the variables you’ve defined and are likely interested in seeing. However, if the variable you are looking for isn’t listed in Auto, you can look for it in the Local, Script, or Global groups.

The WATCH section allows you to specify a variable or expression whose value should always be displayed.

The CALL STACK section displays the call stack, and you can also select a different frame in the call stack to examine calling functions, scripts, and the variables that are defined in those scopes.

The BREAKPOINTS section provides a central UI to manage (that is, create, disable, enable, and delete) breakpoints that you may have defined over many different script files.

You can also see from the previous screenshot that you get hover tips when you hold the cursor over a variable. Hover tips can show simple values, like numbers and strings, or complex objects as shown in the following screenshot:

Example of hover tips

VARIABLES section

The VARIABLES section allows you to inspect variable values, including complex variables such as those shown in the following screenshot:

Examining complex variables

For primitive variable types, the value is displayed directly, typically as numbers, strings, and Booleans.  For non-primitive variables, the type information is displayed. If the type is a collection or an array, the number of elements is displayed as well.

You can do more than just inspect variable values. To change those values, double-click the value that you want to change, enter a new value, and click outside the edit box to complete the operation.

You can enter arbitrary expressions when setting a variable’s value, for example, $itemCount+10 or $null or $true.  Just remember, the expression has to be valid PowerShell syntax.

Example of arbitrary expressions to setting a variable’s value

Watch section

The WATCH section allows you to add a watch for any variable or expression. Simply click the + button (highlighted in the following screenshot), and type the variable name or a PowerShell expression:

The plus (+) button in the Watch section

These values will always be evaluated, if possible. Keep in mind that the variables entered as a watch may not be available in all scopes.

BREAKPOINTS

Besides setting line breakpoints, the PowerShell debugger allows you to set function breakpoints, conditional breakpoints, and tracepoints.

Function breakpoints

Function breakpoints are effectively the same as a command breakpoint that you can set by using Set-PSBreakpoint with the -Command parameter. You can set a function breakpoint to break into the debugger not only on a particular function invocation but also on an alias, a built-in command, or application invocation.
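For reference, the console-side equivalent looks something like this; the script path is illustrative:

Set-PSBreakpoint -Script .\DebugTest.ps1 -Command Write-Output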

To set a function breakpoint, hover over the BREAKPOINTS section title bar, click the + button, type Write-Output, and then press Enter as shown in the following screenshot.

Setting a break point

Remove the line breakpoint that we set earlier on the line that executes the Start-Sleep command.

Press F5 to start debugging the DebugTest.ps1 script, and you will see the debugger stop everywhere Write-Output is called. You can tell when the debugger is stopped on a function breakpoint by looking at the CALL STACK section of the Debug view. It will indicate that it is paused on a function breakpoint. The Debug Console will also indicate that a breakpoint has paused the debugger as shown in the following screenshot. If the Debug Console is not visible, select Debug Console from the View menu, or press Ctrl + Shift + Y.

Select Debug Console from the View menu

Stop debugging (press Shift + F5), and remove the function breakpoint by right-clicking it in the BREAKPOINTS section, and selecting Remove breakpoint.

Conditional breakpoints

A conditional breakpoint is a line breakpoint that breaks into the debugger only when the line is executed and a user-supplied expression evaluates to $true. Conditional breakpoints are handy in scenarios where a line is executed many times, but you’re interested in breaking into the debugger only when a certain “condition” is true.

Let’s set a conditional breakpoint on the $i = $i + 1 line in DebugTest.ps1. Right-click the line, and select Add Conditional Breakpoint…. Enter the expression, $i % 10 -eq 0, as shown in the following screenshot, and then press Enter. As in the case with setting the value of a variable in the VARIABLES section, you have to use PowerShell syntax in the condition expression.

Setting a conditional breakpoint

After you set the breakpoint, you will see that conditional breakpoints are displayed with an “=” sign in the glyph:

Display of conditional breakpoints

When this expression evaluates to $true, the debugger will pause execution. Now press F5 to start debugging. You will notice the debugger stops when $i is 10, 20, 30, 40 and 50. Stop debugging (Shift + F5).

Tracepoints

Tracepoints allow you to emit information to the Debug Console (or change state in your script) without ever pausing the debugger. These are effectively the same as using Set-PSBreakpoint -Action {scriptblock} where the scriptblock tests for a certain condition, and if met, executes some script and then uses Continue to resume execution.
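A rough console-side sketch of the same idea follows; the line number and the traced variable are illustrative, and the continue keyword resumes execution after the action runs:

Set-PSBreakpoint -Script .\DebugTest.ps1 -Line 19 -Action {
    if ($i % 10 -eq 0) { Write-Host "i is now $i" }
    continue
}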

Let’s convert our previous conditional breakpoint to a tracepoint. Right-click the conditional breakpoint in the left editor margin, select Edit Breakpoint…, and modify the condition to:

Converting a conditional breakpoint to a tracepoint

Press F5 to start debugging. You will notice that the script runs to completion without ever breaking into the debugger. In the Debug Console (Ctrl + Shift + Y), you will see the following output from this tracepoint:

Display of output from the tracepoint in the Debug Console

Hit count for breakpoints

Line breakpoints support not only condition expressions but hit counts as well. When you specify a hit count, the PowerShell debugger notes the number of times that the breakpoint has been encountered and only breaks into the debugger after the specified hit count has been reached.

Let’s set a line breakpoint with a hit count. First, remove all previous breakpoints in DebugTest.ps1 by using the Remove All Breakpoints button as highlighted in the following screenshot:

Remove All Breakpoints button

Now set a line breakpoint (F9) on the line: $i = $i + 1. Right-click the Red breakpoint glyph, and select Edit Breakpoint…. Then, click the dropdown, and select Hit Count:

Selecting Edit Breakpoint

This UI allows you to set both Expression and Hit Count to have a conditional breakpoint that obeys the specified hit count. Let’s set the hit count to 25. Press Enter to complete setting the hit count.

Setting Hit Count to 25

Press F5 to start debugging, and you will see the debugger stop when $i is 25. After you press F5 again to continue execution, the breakpoint is not hit again.

In this blog post, we looked at the debugging features of Visual Studio Code and the PowerShell extension.  All debugging examples in this post used a project that had the debugger “preconfigured”. In Part 2 of this series, we will look at how to configure the debugger to launch and debug your scripts.

I think you’ll find the PowerShell debugging experience in Visual Studio Code to be quite productive.  Of course, if you do find a bug, please be sure to submit an issue at https://github.com/PowerShell/vscode-powershell/issues so that we can continue to improve the debug experience for everyone.


Keith Hill

Software Engineer
PowerShell MVP

 

Announcing the 2016 Honorary Scripting Guys


Summary: The Honorary Scripting Guys for 2016 are announced.

Microsoft Scripting Guy, Ed Wilson, is here. Well, it has been an awesome year for Windows PowerShell and for the community. The Scripting Wife and I had the opportunity to speak at lots of user groups and conferences last year. This involved many, many miles of travel. Luckily, the community pitched in and helped – a lot – with the Scripting Guys blog, and so, once again, it was the top Microsoft blog. The cool thing about a strong community, and about community contributions to the blog, is that we can get lots of different views about how we can use Windows PowerShell to solve real-world problems.

So, last year, we had lots of guest blog posts written by a wide variety of scripters. But, there were a few who stood out as making a significant contribution to the blog and to the Windows PowerShell community. It is, therefore, time to name the newest Honorary Scripting Guy.

What does it take to become an official Honorary Scripting Guy? It takes an extreme commitment to the scripting community, a remarkable dedication that helps to spread the good word about Windows PowerShell, and a relentless pursuit of excellence in producing exceptional content. To read more about the Honorary Scripting Guys, see:

Honorary Scripting Guy logo

Which brings us up to date. Now, without much delay, I am happy to introduce you to a new Honorary Scripting Guy, Thomas Rayner.

Here is a little more info about him.

2016 Honorary Scripting Guy’s biography

Thomas Rayner

Thomas Rayner is a Microsoft Most Valuable Professional (MVP) with over 15 years of experience in the IT industry. His background is in Cloud and Datacenter Management, specializing in DevOps, systems and process automation, and PowerShell. Thomas is a prominent international speaker, best-selling author, and instructor covering a vast array of IT topics. Thomas is very active within the technical community and a variety of Microsoft technical and strategic teams. He is the President of the Edmonton Microsoft User Group.

By day, Thomas works for PCL Constructors on their DevOps and Automation team. Thomas enjoys working with a wide variety of different products and technologies, particularly emerging and disruptive technologies and automation-related products. His position with PCL affords him the luxury of facing interesting challenges every day.

Twitter: @MrThomasRayner

Here is a list of the previous years’ Honorary Scripting Guy recipients who made a significant contribution to the Hey, Scripting Guy! Blog in 2016. If they were not already Honorary Scripting Guys, they would be!

Congratulations to our newest Honorary Scripting Guy, and also a huge thanks to all the guest bloggers.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. Also check out my Microsoft Operations Management Suite Blog.

Ed Wilson
Microsoft Scripting Guy

Cloud operating system deployment: WinPE in Azure


Jason Ryberg is a Consultant for Microsoft, where he writes PowerShell code and provides DevOps support. Have you ever wanted to boot to WinPE in Azure and select a Microsoft Deployment Toolkit (MDT) Task Sequence? As part of an informal cloud-readiness evaluation, I was asked to deploy a server image to Azure. The image that I was given would run the MDT Deployment Wizard to allow for the selection of deployment options. But how could I choose deployment options if I could not reach the Azure virtual machine in the boot environment? Read on to find out how to interact with an Azure virtual machine in a preboot environment!

Introduction

Plenty of moving pieces need to be precisely configured as part of the process. The following diagram describes the relationship of the components and visualizes the required configurations. Before you begin, ensure that you have met all the system requirements.

Diagram of relationship of the components and visualization of required configurations

System requirements

  • Install MDT and the Windows Assessment and Deployment Kit (Windows ADK) as usual with default values.
  • Ensure that you have proper operating system installation media. Use Volume Licensed images if possible. (This post covers deploying Windows Server 2012 R2.)
  • Create an account for Azure Storage, and add Blob storage.
  • Ensure that Hyper-V services are installed on your workstation; Hyper-V is used to create a custom VHD. (A quick verification sketch appears after this list.)

  • Track down the install bits for the Diagnostics and Recovery Toolset (DaRT), which is part of the Microsoft Desktop Optimization Pack (MDOP) and requires a Visual Studio subscription or Software Assurance. Because the MDOP requires Software Assurance to use legally, I cannot provide a direct link. DaRT is used to interact with the WinPE console remotely to complete the Task Sequence Wizard in Azure.
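If you want to verify the Hyper-V prerequisite from PowerShell, something like the following works; the feature names are the standard Windows ones, so adjust for client versus server:

# Check for Hyper-V on a Windows client (run elevated).
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All

# On Windows Server, use the server cmdlet instead.
# Get-WindowsFeature -Name Hyper-V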

Task sequence modifications

Through the process of elimination, I discovered that many of the default steps in a Server Deployment Task Sequence end up breaking a virtual machine deployment in Azure. The following steps can be removed or disabled to ensure proper deployment:

  1. “Preinstall – New Computer Only – Format and Partition Disk”
  2. “Preinstall – Inject Drivers”
  3. “Postinstall – Add Windows Recovery (WinRE)”

Now there are steps to add to get DaRT integrated with your task sequence:

  1. In the Install – Install Operating System step, specify the partition to install the operating system.
    1. Change the Select the location where you want to apply this operating system dropdown to Specific disk and partition.
    2. Select 0 for Disk and 2 for Partition.
  2. Add a “Postinstall – Copy Scripts to OS Drive” step to include a Diskpart script that’s used to remount the WinPE partition after reboot.
    1. Modify the “Postinstall – Copy Scripts to OS Drive” command-line textbox to the following text:
      cmd /c mkdir %OSDisk%\Scripts && copy %SCRIPTROOT%\assignPEletter.txt %OSDisk%\Scripts\assignPEletter.txt
    2. Copy the following text into a file named assignPEletter.txt:
      select volume 1
      assign letter k
      exit
    3. Copy the assignPEletter.txt file to the Scripts directory of the Deployment Share. (A PowerShell sketch that creates and copies this file appears after this list.)
  3. An additional step is required to ensure that the assignPEletter.txt file is run before the operating system is started.
    1. Click the OS Info tab in the Task Sequence dialog box, and then click the Edit Unattend.xml button.
    2. Expand the 7 oobeSystem item, the amd64_Microsoft-Windows-Shell-Setup_neutral item, and the FirstLogonCommands item.
    3. Right-click the FirstLogonCommands item, and then click New SynchronousCommand.
    4. Copy “cmd /c diskpart.exe /s %SystemDrive%\Scripts\assignPELetter.txt” to the CommandLine textbox. Add an appropriate description, and change the Order to 2.
    5. Close the Unattend.xml file, and confirm saving the file.
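If you would rather script the creation of the Diskpart file from step 2, the following sketch does it; the deployment share path is an assumption, so substitute your own:

# Hypothetical deployment share location; adjust to your environment.
$scriptsDir = 'C:\DeploymentShare\Scripts'

# Write the Diskpart commands and drop the file into the deployment share's Scripts directory.
@'
select volume 1
assign letter k
exit
'@ | Set-Content -Path (Join-Path $scriptsDir 'assignPEletter.txt') -Encoding Ascii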

MDT Media

Before deployment, the Task Sequence requires a delivery method, which MDT refers to as Media. Many options are available for creating Task Sequence Media, but this post only touches on the pertinent steps. Create Media if you do not already have it, open your Media’s properties, and ensure that SkipBDDWelcome=YES is listed in the Rules textbox and the Bootstrap.ini textbox. By skipping the welcome page, the deployment wizard and the DaRT utility launch automatically. To install DaRT in your media, first download and install DaRT, and then copy the required files from the DaRT installation path to your MDT directory.

Copy-Item 'C:\Program Files\Microsoft DaRT\v10\Toolsx64.cab' 'C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x64'

After you copy the file, open your Media’s properties, click the Features tab, and check the box next to Microsoft Diagnostics and Recovery Toolkit (DaRT) at the bottom of the list. By including the DaRT feature in the WinPE environment, you now have a method to remotely interact with WinPE. Finally, copy the DartRemoteViewer.exe file to your workstation. The Dart Remote Viewer is the tool that we launch to interact with our Azure virtual machine. But, how exactly do you connect to the WinPE environment?
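Copying the remote viewer to the workstation can be as simple as the following; the source path is an assumption, because DaRT install paths vary by version:

# Source path is an assumption; check your DaRT installation directory.
Copy-Item 'C:\Program Files\Microsoft DaRT\v10\DartRemoteViewer.exe' -Destination "$env:USERPROFILE\Desktop"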

Reaching the connection data

DaRT generates an XML configuration file – Inv32.xml – that contains the connection information, but the file is generated locally on the Azure virtual machine. Because you can’t reach the virtual machine in the preboot environment, the file will need to be copied to a readable location.

Figure 1 Example Inv32.xml file

The XML configuration file that DaRT generates

To perform this task, I supplemented the check for DaRT bits in MDT’s LiteTouch.wsf script to mount Azure File storage as a local drive and copy the Inv32.xml file to that drive. Replace the If block starting on line 1576 in the LiteTouch.wsf file with the following If block.

If oFSO.FileExists(oEnv("SystemRoot") & "\System32\inv32.xml") then

oShell.Run "cmd.exe /C net use [drive letter] \\[storage account].file.core.windows.net\[file service] /u:[storage account] [storage account access key] && copy /Y x:\windows\system32\inv32.xml J:\inv32.xml"

ElseIf not oFSO.FileExists(oEnv("SystemRoot") & "\System32\inv32.xml") then

oLogging.CreateEntry "Unable to find the inv32.xml file, DaRT remote control is not running.", LogTypeInfo

Exit Sub

End if

When the virtual machine loads WinPE, the MDT deployment process will load the LiteTouch.wsf script, which then checks for the Inv32.xml file. If the script finds the file, it attempts to mount Azure File storage and copy the file to the locally mapped drive. (After the file is copied to Azure File storage, you can mount the same Azure File storage to your workstation and view the connection file. More on that later.)
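On the workstation side, you can mount that same Azure File share and read the copied file with something like the following sketch; the storage account name, share name, and key are placeholders:

# All values below are placeholders for your own storage account and file share.
$storageAccount = 'mystorageacct'
$shareName      = 'winpeshare'
$accessKey      = '<storage account access key>'

# Map the share, then load the connection information that DaRT wrote for the Remote Viewer.
net use J: "\\$storageAccount.file.core.windows.net\$shareName" /u:$storageAccount $accessKey
[xml]$inv = Get-Content -Path 'J:\inv32.xml' -Raw
$inv.OuterXml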

Note: Currently this method requires that your workstation be connected to the same virtual network as the target virtual machine. This is due to the default DaRT configuration connecting to the virtual machine’s private IP. Efforts to connect the DaRT service to the public IP are in progress.

VHD customization

Next, we create the custom virtual hard disk (VHD) to contain our MDT Media content. Before beginning, please ensure that your image meets all the Azure prerequisites. Then, create the VHD container with two partitions – one for WinPE and one for the operating system bits. The VHD is, of course, created by using PowerShell.

  1. Create a VHD with two partitions – one for WinPE and one for the operating system. Note that the VHD is a fixed size, because Azure does not support dynamic VHDs.

Import-Module Hyper-V

$LocalVhdPath = "C:\VHDs\winpeAzure.vhd"
$VhdSize = 21GB
$WinPePartitionSize = 7GB
$FsLabel = "WinPE"
$OsDiskLabel = "OSDisk"

New-VHD -Path $LocalVhdPath -SizeBytes $VhdSize -Fixed
Mount-DiskImage -ImagePath $LocalVhdPath
$mountedDisk = Get-DiskImage -ImagePath $LocalVhdPath
Initialize-Disk -Number $mountedDisk.Number -PartitionStyle MBR
New-Partition -DiskNumber $mountedDisk.Number -Size $WinPePartitionSize -AssignDriveLetter -IsActive | Format-Volume -FileSystem NTFS -NewFileSystemLabel $FsLabel -confirm:$false
New-Partition -DiskNumber $mountedDisk.Number -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel $OsDiskLabel -confirm:$false

  2. Update the Media, and copy the Media folder contents to the WinPE partition of the VHD that you just created.

$MdtDrive = "DS001"
$DeploymentShare = "C:\DeploymentShare"
$MediaName = "Media001"

Add-PSSnapin -Name Microsoft.BDD.PSSnapIn

#Check for MDT Drive and mount if not
if (!(Test-Path "$($MdtDrive):"))
{
    New-PSDrive -Name $MdtDrive -PSProvider MDTProvider -Root $DeploymentShare
}

$mdtMediaPath = "$($MdtDrive):\Media\$MediaName"
Update-MDTMedia -Path $mdtMediaPath

Mount-DiskImage $LocalVhdPath -ErrorAction Ignore
$mountedDisk = Get-DiskImage -ImagePath $LocalVhdPath
$winPePartition = Get-Volume -FileSystemLabel $FsLabel
$driveLetter = $winPePartition.DriveLetter
$tsMediaContentPath = "C:\Media\Content"

Write-Output "Copying updated content to VHD"
Copy-Item $tsMediaContentPath\* "$($driveLetter):\" -Recurse -Force
Dismount-DiskImage $LocalVhdPath

  3. Copy the VHD to your Azure subscription.

$SubscriptionId = "[SUBSCRIPTION_GUID]"
$StorageAccountName = "[STORAGE ACCOUNT NAME]"
$VhdPath = "vhds/winpe-final.vhd"

$login = Login-AzureRmAccount -SubscriptionId $SubscriptionId
$storageAccount = Get-AzureRmStorageAccount | Where-Object StorageAccountName -EQ $StorageAccountName
$resourceGroupName = $storageAccount.ResourceGroupName

Add-AzureRmVhd -Destination "$($storageAccount.PrimaryEndpoints.Blob.ToString())$VhdPath" -LocalFilePath $LocalVhdPath -ResourceGroupName $resourceGroupName -OverWrite

  4. Finally, create an Azure virtual machine by using the VHD as a source image. The entire script resource that provides this functionality is listed in the Resources section. The following are the pertinent parts of the script.

$TargetDiskParent = "VHDs/"
$vmName = "IMGWinPETest"
$vm = New-AzureRmVMConfig -VMName $vmName -VMSize "Standard_D1"

$sourceImageUri = "$($storageAccount.PrimaryEndpoints.Blob.ToString())$VhdPath"
$osDiskUri = "$($storageAccount.PrimaryEndpoints.Blob.ToString())$TargetDiskParent$vmName-OsDisk.vhd"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name "$vmName-OSDisk" -VhdUri $osDiskUri -CreateOption FromImage -SourceImageUri $sourceImageUri -Windows
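The snippet above only builds the VM configuration and OS disk. Before Azure will provision anything, the configuration also needs a network interface and a final New-AzureRmVM call; the full script in the Resources section covers this, but a rough sketch of the remaining calls looks like the following (the virtual network, subnet, and NIC names are assumptions):

# Network names and address ranges below are assumptions; see the full script in the Resources section.
$location = $storageAccount.Location
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '10.0.0.0/24'
$vnet = New-AzureRmVirtualNetwork -Name 'winpe-vnet' -ResourceGroupName $resourceGroupName -Location $location -AddressPrefix '10.0.0.0/16' -Subnet $subnet
$nic = New-AzureRmNetworkInterface -Name "$vmName-nic" -ResourceGroupName $resourceGroupName -Location $location -SubnetId $vnet.Subnets[0].Id
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

# Submit the deployment; provisioning starts once this call is accepted.
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $location -VM $vm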

WinPE interaction in Azure using DaRT

After you run your virtual machine configuration script, the provisioning process in Azure starts. To view the deployment, sign in to the Azure portal and find your new virtual machine. You can still view the virtual machine’s console by selecting the virtual machine, opening the Support & Troubleshooting section of the VM properties blade, and clicking Boot Diagnostics. A new blade opens with a screenshot of the VM console.

However, to interact with the console, connect to an endpoint that is on the same virtual network as the target virtual machine. Launch the DaRT Remote Viewer utility from your workstation, and copy the connection info from the Inv32.xml file on Azure File storage into the connection dialog box.

Figure 2 DaRT Remote Connection Viewer

New Remote Connection dialog box

After you click Connect, you should be looking at your WinPE environment in Azure, and you can interact with it, even completing the MDT Deployment Wizard.

Figure 3 DaRT console interaction with WinPE in Azure

Remote Connection dialog box

Next steps

To improve this process, the first step would be to configure DaRT to attach to the public IP interface, which would remove the requirement that DaRT connections come from the same virtual network. Additionally, the process could be automated further by reading the XML file and prompting technicians to connect whenever a new connection file becomes available. Monitoring tools could also be integrated to polish the connection process.

Resources

The two scripting resources referenced in this post are located at the Script Center:

Configure WinPE VHD for Azure VM Pre-boot Interaction

Deploy AzureRM VM with Accessible Pre-boot Environment

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. Also check out my Microsoft Operations Management Suite Blog.

Debugging PowerShell script in Visual Studio Code – Part 2


Welcome to Part 2 of the series about how to debug PowerShell in Visual Studio Code. In Part 1, we looked at the debugging features of Visual Studio Code with the PowerShell extension installed. Now we will examine the various ways that you can start debugging PowerShell scripts with Visual Studio Code.

Single file debugging

With the 1.9 release of Visual Studio Code, you can now debug a PowerShell script with no debugger configuration required and no need to open a workspace. That is, you can open a single PowerShell file instead of a folder and still debug it.

In the following example, I open a single script file, set a breakpoint, and press F5. Debugging a PowerShell script is that simple in Visual Studio Code 1.9!

Note: The following image is animated. If you don’t see the animation, click the image to start the animation.

Debugging a PowerShell script in Visual Studio Code 1.9

When you open a single file, Visual Studio Code operates in “no-folder workspace” mode, as indicated by the purple status bar. When you open a folder, you are in a regular workspace, and the status bar is blue.

Workspace debugging

Workspace debugging occurs when you start a debug session after you have opened a folder via Open Folder… from the File menu.  When you open a folder, Visual Studio Code designates that folder as the workspace root. Typically, you open a folder that is the root of your Git repository and/or project folder.

Visual Studio Code stores debugger launch configurations, like the other configuration and settings files that we’ve encountered so far, in a JSON file under the .vscode directory. The name of the JSON file that stores debugger launch configurations is launch.json. If the workspace is controlled by a software configuration management tool such as Git, you will typically want to add the config files that are under .vscode to source control.

You may be wondering why you should bother with this launch.json file if you can debug without it. It turns out that having a launch configuration file can be quite handy because it allows you to:

  • Create a launch configuration that always starts a specific script with the specified arguments.
  • Create a launch configuration to launch whichever file is in the active editor window (like in ISE).
  • Create a launch configuration to attach to the interactive session, that is, Debug Console.
  • Create and select from multiple launch configurations.

Let’s look at an example of creating the PowerShell launch configurations for the debugger.  Select Open Folder… from the File menu to open a folder that contains one or more of your PowerShell scripts. Open the Debug view, and you will note that the Launch Configuration dropdown indicates that there are “No Configurations” as shown in the following screenshot:

Display of "No configurations"

Now click the gear icon next to the dropdown (highlighted in the previous screenshot). You will see a list of available debuggers:

 

List of available debuggers

Your list may differ depending on the extensions that you have installed. Select PowerShell from the list. That will create the .vscode\launch.json file. This whole process is shown in the following screenshot:

Note: The following image is animated. If you don’t see the animation, click the image to start the animation.

 

Creating the .vscode\launch.json file

By default, the PowerShell extension initially configures your launch.json file with the three launch configurations that are in the following screenshot:

The three default configurations in launch.json

These launch configurations support the following debug scenarios:

  • PowerShell Launch (current file) – This is what we have been using. It launches the file in the active editor window under the debugger. This is how the PowerShell integrated scripting environment (ISE) debugger works.
  • PowerShell Attach to Host Process – This allows you to attach to another process that hosts the PowerShell engine and debug script that’s running in that process.
  • PowerShell Interactive Session – This attaches the debugger to the Debug Console session. This can be handy for importing your module and debugging it from the Debug Console prompt. This configuration can also be handy if you want to use the Set-PSBreakpoint command to set variable breakpoints, that is, breakpoints that fire when a variable is read or written. Setting this type of breakpoint is currently not supported by Visual Studio Code (see the example after this list).
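For reference, a variable breakpoint set with Set-PSBreakpoint from such an interactive session might look like the following sketch; the script path and variable name are placeholders:

# Break whenever the script writes to $i.
Set-PSBreakpoint -Script .\DebugTest.ps1 -Variable i -Mode Write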

There is a fourth configuration that is not added by default:

  • PowerShell Script Configuration – Launches the file that’s specified in the configuration by using a path relative to the ${workspaceRoot} configuration variable.

Let’s add this fourth launch configuration to the launch.json file that we previously created. Click the Add Configuration button in the lower-right area of the editor window for launch.json. You will see the following list.  Your list may differ depending on the extensions that you have installed.

Adding a configuration

Select PowerShell: Launch Script Configuration from the list. By the way, one consequence of having this “Add Configuration” feature is that you may have a C# project configured to use the .NET Core debugger, yet you have a few PowerShell scripts in the project. You can debug those PowerShell scripts from the C# project by adding a PowerShell launch configuration to the C# project’s launch.json file.

The following animated GIF shows the whole process of adding a launch configuration that launches the PowerShell extension’s Examples\DebugTest.ps1 file with arguments:

Note: The following image is animated. If you don’t see the animation, click the image to start the animation.

Adding a launch configuration that launches the PowerShell extension’s Examples\DebugTest.ps1 file with arguments

As you can see, when you select the “Launch PowerShell (DebugTest.ps1)” launch configuration, starting the debugger will always start the DebugTest.ps1 file under the debugger with the specified arguments.

Launch configuration settings / variables

Most settings in the PowerShell launch configuration are pretty self-explanatory.

  • script is the path to the script to execute.
  • cwd is the current working directory that should be set for the debug session.
  • args is an array of arguments to pass to the script being debugged.

Although args is an array, it usually works best to put all the arguments in a single string, for example:

{

"type": "PowerShell",
"request": "launch",
"name": "PowerShell Launch (MyScript.ps1)",
"script": "${workspaceRoot}/MyScript.ps1",
"args": [ "-Count 42 -DelayMillseconds 2000" ],
"cwd": "${workspaceRoot}"

},

Besides ${workspaceRoot} and ${file}, the following variables are available for you to use in launch.json:

  • ${workspaceRoot}: the path of the folder opened in Visual Studio Code
  • ${workspaceRootFolderName}: the name of the folder opened in Visual Studio Code without any slashes (/)
  • ${file}: the current opened file
  • ${relativeFile}: the current opened file relative to workspaceRoot
  • ${fileBasename}: the current opened file’s basename
  • ${fileBasenameNoExtension}: the current opened file’s basename with no file extension
  • ${fileDirname}: the current opened file’s dirname
  • ${fileExtname}: the current opened file’s extension
  • ${cwd}: the task runner’s current working directory on startup

You can also reference environment variables like ${env.USERPROFILE}.

Note: env must be all lowercase, and be careful to use env. instead of env:. This is Visual Studio Code’s syntax, not PowerShell syntax.

PowerShell interactive session debugging

If you select the PowerShell Interactive Session launch configuration and start debugging, the debugger attaches to Visual Studio Code’s Debug Console, but it doesn’t run any script. In a future update to the PowerShell extension, the plan is to use a dedicated interactive console that runs as one of your terminal windows.

How is debugging an interactive session useful? Well, you might want to debug one of your module’s commands while using it from the console. Or you might want to debug in a remote session.

To debug a module command, select the PowerShell Interactive Session launch configuration, and press F5 to start debugging. In the Debug Console, execute the Import-Module command to import your module. Then, execute the module command that you want to debug. Of course, make sure that you’ve set a breakpoint in the appropriate place so that you can break into the debugger at the desired point of execution within the command. This process is illustrated in the following screenshot:

Note: The following image is animated. If you don’t see the animation, click the image to start the animation.

Debugging a module command
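In practice, the commands typed at the Debug Console look something like the following; the module path and command name are hypothetical:

# Typed at the Debug Console prompt after starting the "PowerShell Interactive Session" configuration.
Import-Module .\MyModule\MyModule.psd1   # hypothetical module path
Set-PSBreakpoint -Command Get-MyWidget   # or set a line breakpoint in the editor beforehand
Get-MyWidget -Name 'test'                # execution breaks inside the command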

For remote session debugging, you also start debugging with the PowerShell Interactive Session launch configuration. After the debug session has started, type Enter-PSSession <hostname> to create a session on the remote computer. After connection, you can use psedit to open the remote script in Visual Studio Code where you can set breakpoints and step through the script.  This process is shown in the following screenshot:

Note: The following image is animated. If you don’t see the animation, click the image to start the animation.

Remote debugging
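The remote workflow boils down to a couple of commands at the Debug Console; the computer name and script path here are placeholders:

# Create the remote session, then open the remote script locally so you can set breakpoints in it.
Enter-PSSession -ComputerName server01
psedit C:\Scripts\Get-Inventory.ps1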

PowerShell attach to host process

With “attach to” support, you can attach the PowerShell debugger to another PowerShell host process. Where I’ve found this particularly useful is in debugging PowerShell tab expansion scripts.

In this scenario, I need to debug posh-git’s GitTabExpansionInternal function. I set a line breakpoint in this function, and then select the PowerShell Attach to Host Process launch configuration. When I start debugging, I see a selection list of PowerShell host processes that can be attached to:

List for PowerShell host processes that can be attached to

Note that this list displays the process ID (PID) and the host’s main window title bar text. I’ve updated my prompt function to display the current PowerShell process’s PID in the title bar.  This makes it easier to know which PowerShell console process I want to attach to. After you select a process, the debugger starts.
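A prompt function that does this can be as small as the following sketch (not necessarily the exact function I use):

# Show the current process ID in the console title so it is easy to pick in the attach list.
function prompt {
    $Host.UI.RawUI.WindowTitle = "PowerShell - PID: $PID"
    "PS $($executionContext.SessionState.Path.CurrentLocation)> "
}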

Now I can attempt tab completion of git commands like git push or<tab>, and the debugger will break on my breakpoint. This whole scenario is demonstrated in the following screenshot:

Note: The following image is animated. If you don’t see the animation, click the image to start the animation.

The "attach to" process

In this blog post, we looked at the debugging features of Visual Studio Code and the PowerShell extension. We also looked at how to configure the debugger to launch your scripts under the debugger, how to interactively debug local and remote scripts, and how to attach to PowerShell host processes.

I think you’ll find the PowerShell debugging experience in Visual Studio Code to be quite productive.  Of course, if you do find a bug, please be sure to submit an issue at https://github.com/PowerShell/vscode-powershell/issues so we can continue to improve the debug experience for everyone.

Keith Hill
Software Engineer
PowerShell MVP

 
