
Friday, January 27, 2023

PowerShell Core compatibility: A lesson learned the hard way

PowerShell Core is my preferred scripting language. I've been excited about it since its early days. Here's a tweet from back in 2016 when PowerShell Core was still in beta:

 

I've used PowerShell to automate build steps, deployments, and other tasks in both dev environments and CI/CD pipelines. It's great to write a script on my Windows machine, test it using PowerShell Core, and run it in my Docker Linux-based build environments with 100% compatibility. Or so I thought, until I learned otherwise!

A few years ago, I was automating a process which required creating a folder if it didn't exist. Out of laziness, this is how I implemented this functionality: 

mkdir $folder -f

When the folder exists and the -f (short for -Force) flag is passed, the command returns the existing directory object without errors. I know this is not the cleanest approach (more on this later), but it worked on my Windows machine, so it should also work in the Docker Linux container. Except that it didn't. When the script ran, it resulted in this error:

/bin/mkdir: invalid option -- 'f'
Try '/bin/mkdir --help' for more information.

Why did the behavior differ? It turns out that mkdir means different things depending on whether you're running PowerShell on Windows or Linux. This can be observed using the Get-Command cmdlet:

# Windows:
Get-Command mkdir

The output is:

CommandType     Name                                               Version
-----------     ----                                               -------
Function        mkdir

Under Windows, mkdir is a function, and its definition can be obtained using:

(Get-Command mkdir).Definition

And the output is:

<#
.FORWARDHELPTARGETNAME New-Item
.FORWARDHELPCATEGORY Cmdlet
#>

[CmdletBinding(DefaultParameterSetName='pathSet',
    SupportsShouldProcess=$true,
    SupportsTransactions=$true,
    ConfirmImpact='Medium')]
    [OutputType([System.IO.DirectoryInfo])]
param(
    [Parameter(ParameterSetName='nameSet', Position=0, ValueFromPipelineByPropertyName=$true)]
    [Parameter(ParameterSetName='pathSet', Mandatory=$true, Position=0, ValueFromPipelineByPropertyName=$true)]
    [System.String[]]
    ${Path},

    [Parameter(ParameterSetName='nameSet', Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [AllowNull()]
    [AllowEmptyString()]
    [System.String]
    ${Name},

    [Parameter(ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
    [System.Object]
    ${Value},

    [Switch]
    ${Force},

    [Parameter(ValueFromPipelineByPropertyName=$true)]
    [System.Management.Automation.PSCredential]
    ${Credential}
)

begin {
    $wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand('New-Item', [System.Management.Automation.CommandTypes]::Cmdlet)
    $scriptCmd = {& $wrappedCmd -Type Directory @PSBoundParameters }

    $steppablePipeline = $scriptCmd.GetSteppablePipeline()
    $steppablePipeline.Begin($PSCmdlet)
}

process {
    $steppablePipeline.Process($_)
}

end {
    $steppablePipeline.End()
}

As you can see, it wraps the New-Item cmdlet. Under Linux, however, it's a different story:

# Linux:
Get-Command mkdir

Output:

CommandType     Name                                               Version
-----------     ----                                               -------
Application     mkdir                                              0.0.0.0

It's an application, and the source of this application can be retrieved with:

(Get-Command mkdir).Source
/bin/mkdir

Now that I know the problem, the solution is easy:

New-Item -ItemType Directory $folder -Force

It's generally recommended to use cmdlets instead of aliases or other shortcuts to improve readability and portability. Unfortunately, PSScriptAnalyzer (which integrates well with VS Code) will only flag aliases like ls via its AvoidUsingCmdletAliases rule, not functions like mkdir.
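A defensive pattern that sidesteps the platform difference entirely is to test for the folder first. A minimal sketch, assuming $folder holds the target path (a temp path is used here for illustration):

```powershell
# Portable, idempotent directory creation: behaves the same in Windows
# PowerShell and PowerShell Core on Linux, since it uses only cmdlets
$folder = Join-Path ([IO.Path]::GetTempPath()) "demo-folder"
if (-not (Test-Path -LiteralPath $folder)) {
    New-Item -ItemType Directory -Path $folder | Out-Null
}
```

Running it twice is safe: the second run finds the folder and does nothing.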

I learned my lesson. However, I did it the hard way.

Wednesday, November 9, 2016

Nano Server on AWS: Step by Step

Windows Server 2016 comes in many flavors. Nano Server is the new addition, optimized to be lightweight with a smaller attack surface. It has a much smaller memory and disk footprint and a much faster boot time than Windows Server Core and the full Windows Server. These characteristics make Nano a perfect OS for the cloud and similar scenarios.
However, being a headless (no GUI) OS means that no RDP connection can be made to administer the server. Also, since only the very core bits are included by default, configuring server features is a different story than on the full Windows Server.
In this post I'll explain how to launch and connect to a Nano instance on AWS, and then use the package management features to install IIS.

Launching an EC2 Nano server instance:

  • In the AWS console under the EC2 section, click "Launch Instance"
  • Select the "Microsoft Windows Server 2016 Base Nano" AMI.


  • In the "Choose an Instance Type" page, select the "t2.nano" instance type. This instance type has 0.5 GB of RAM. Yes, this will be more than enough for this experiment.
  • Use the default VPC and use the default 8GB storage.
  • In the "Configure Security Group" page things will start to be a bit different from the usual full windows server. Create a new security group and select these two inbound rules: 
    • WinRM-HTTP: Port 5985. This will be used for the remote administration.
    • HTTP: Port 80. To test IIS from our local browser.

  • Note that the AWS console gives a warning regarding port 3389, which is used for RDP. We can safely ignore this warning as we'll use WinRM; RDP is not an option with Nano Server.
  • Continue as usual and use an existing key pair, or let AWS generate a new key pair to be used for Windows password retrieval.

 

Connecting to the Nano server instance:

After the instance status becomes "running" and all status checks pass, observe the public IP of the instance. To manage this server, we'll use WinRM (Windows Remote Management) over HTTP. To be able to connect to the machine, we need to add it to the trusted hosts as follows:
  • Open PowerShell in administrator mode
  • Enter the following commands to add the server : (assuming the public IP is 52.59.253.247)
$ip = "52.59.253.247"
Set-Item WSMan:\localhost\Client\TrustedHosts "$ip" -Concatenate -Force

Now we're ready to connect to the Nano server:
Enter-PSSession -ComputerName $ip -Credential "~\Administrator"


PowerShell will ask for the password, which you can retrieve from the AWS console using the "Get Windows Password" menu option and uploading the private key file you saved on your local machine.

If everything goes well, all PowerShell commands you enter from now on will be executed on the remote server. So now let's reset the administrator password for the Nano instance:
$pass = ConvertTo-SecureString -String "MyNewPass" -AsPlainText -Force
Set-LocalUser -Name Administrator -Password $pass
Exit 

This will change the password and disconnect. To connect again, we can use the following commands and use the new password:
$session = New-PSSession -ComputerName $ip -Credential "~\Administrator"
Enter-PSSession $session



Installing IIS:

As Nano is a "just enough" OS, feature binaries are not included by default. We'll use external package repositories to install other features like IIS, Containers, Clustering, etc. This is very similar to the apt-get and yum tools in the Linux world; the Windows alternative is OneGet. The NanoServerPackage repository has instructions for adding the Nano Server package source, which depends on the Nano Server version. We know that the AWS AMI is based on the released version, but it doesn't hurt to do a quick check:
Get-CimInstance win32_operatingsystem | Select-Object Version

The version in my case is 10.0.14393. So to install the provider, we'll run the following:
Save-Module -Path "$env:programfiles\WindowsPowerShell\Modules\" -Name NanoServerPackage -minimumVersion 1.0.1.0
Import-PackageProvider NanoServerPackage

Now let's explore the available packages using:
Find-NanoServerPackage
or the more generic command:
Find-Package -ProviderName NanoServerPackage


We'll find the highlighted IIS package. So let's install it and start the required services:
Install-Package -ProviderName NanoServerPackage -Name Microsoft-NanoServer-IIS-Package
Start-Service WAS
Start-Service W3SVC


Now let's point our browser to the IP address of the server. And here is our beloved IIS default page:


Uploading a basic HTML page:

Just for fun, create a basic HTML page on your local machine using your favorite tool, and let's upload it and try accessing it. First, enter the exit command to leave the remote management session and get back to the local computer. Note that in a previous step we stored the result of New-PSSession in the $session variable, so we'll use it to copy the HTML page to the remote server over the management session:
Copy-Item "C:\start.html"  -ToSession $session -Destination C:\inetpub\wwwroot\

Navigate to http://nanoserverip/start.html to verify the successful copy of the file.


Conclusion:

Nano Server is a huge step forward in enabling higher density of infrastructure and applications, especially in the cloud. However, it requires adopting a new mindset and a new set of tools to get the best out of it.
In this post I just scratched the surface of using Nano Server on AWS. In future posts we'll explore deploying applications on it to get real benefits.

Friday, May 24, 2013

Output to multiple destinations in PowerShell

Sometimes you need to output the result of a cmdlet execution or a variable to both the screen and a file, for example if you want to log all operations in a script in addition to showing the output to the user.
Calling both Out-Host and Out-File for each operation is clearly not a good option.
The nice Tee-Object cmdlet can perform this functionality, but prior to version 3.0 it cannot append output to an existing file. So be it, I have to code it:


function Out-All([string]$FilePath, [switch]$Append)
{
    Begin
    {
        if($Append -eq $False)
        {
            New-Item -Path $FilePath -ItemType File -Force | Out-Null
        }
    }
    Process
    {
        $_ | Out-Host
        $_ | Out-File -FilePath $FilePath -Append
    }
}



The above function writes the pipeline input both to a file (using Out-File) and to the screen, or whatever is consuming the output (using Out-Host). The advantage is that it has the Append switch.
The Begin block, which executes before the pipeline data is processed, checks the Append switch and creates a new file (or not) accordingly.
The Process block is responsible for the actual writing of data.

Sample use:

(1..100) | Out-All -FilePath "C:\log\data.log"

(1..100) | Out-All -FilePath "C:\log\data.log" -Append

dir | Out-All -FilePath "C:\log\data.log" -Append
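For completeness: on PowerShell 3.0 and later, Tee-Object gained the -Append switch, so the built-in cmdlet covers this scenario directly. A quick sketch using a temp file:

```powershell
# Tee-Object writes pipeline input to a file and passes it through;
# -Append (PowerShell 3.0+) adds to the file instead of overwriting it
$log = Join-Path ([IO.Path]::GetTempPath()) "tee-demo.log"
(1..3) | Tee-Object -FilePath $log | Out-Null
(4..6) | Tee-Object -FilePath $log -Append | Out-Null
```

After both runs, the file contains all six lines.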

Saturday, July 9, 2011

PowerShell 32 and 64 bit have different execution policy settings

I use PowerShell to automate many repetitive tasks, and build automation is one of the areas I like most.
I faced a situation where I got this error from a PowerShell script running in a Visual Studio project post-build event:
File XXX.ps1 cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details.
I knew this error is usually caused by an execution policy that denies execution of scripts. So I made sure the execution policy was set to RemoteSigned. But this did not work!

I added this to the batch file that calls the PowerShell script:

powershell "Get-ExecutionPolicy -List"

And the result was:

Scope           ExecutionPolicy
-----           ---------------
MachinePolicy   Undefined
UserPolicy      Undefined
Process         Undefined
CurrentUser     Undefined
LocalMachine    Undefined

After some research I found that since the machine is 64-bit, there are two versions of PowerShell: 32-bit and 64-bit. Again I edited the batch file, adding:

Powershell.exe "Get-Variable PSHOME"

And ran a build from visual studio, the result was:

Name   Value
----   -----
PSHOME C:\Windows\SysWOW64\WindowsPowerShell\v1.0

This shows that the version invoked was the 32-bit version, while the version I had used to run Set-ExecutionPolicy was the 64-bit version. I determined the paths from the Start menu shortcuts to PowerShell:

Windows PowerShell (x86): %SystemRoot%\syswow64\WindowsPowerShell\v1.0\powershell.exe
And Windows PowerShell: %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe

So I opened Windows PowerShell (x86) and executed:

Set-ExecutionPolicy RemoteSigned

And it worked.

So, did the Visual Studio post-build event call the 32-bit version because Visual Studio is a 32-bit application?
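To avoid guessing, a script can check at run time which flavor of PowerShell it was launched in. A small sketch (assuming a host running .NET 4 or later; [Environment]::Is64BitProcess does not exist on .NET 2.0):

```powershell
# $true when running in the 64-bit host, $false under the 32-bit
# (SysWOW64) host on a 64-bit machine
[Environment]::Is64BitProcess

# $PSHOME also reveals the host's install directory, as used in the post
$PSHOME
```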

Thursday, April 1, 2010

Splitting csv file based on content in one line using PowerShell

Problem:
You have a csv file that contains department employees in a format like this:
Department,Employee
Sales,emp1
HR,emp2
Sales,emp3
Finance,emp4
Finance,emp5
Security,emp6
Security,emp7
Security,emp8
HR,emp9
And you need to split this file's contents into separate files based on department name. So for the above example, we should get four files: Sales.csv, HR.csv, Finance.csv, and Security.csv, each containing only its own employees.
And the solution really shows the power of PowerShell pipelining:

Import-Csv file.csv | Group-Object -Property "department" | Foreach-Object {$path=$_.name+".csv" ; $_.group | Export-Csv -Path $path -NoTypeInformation}

Dissecting the above commands:
Import-Csv file.csv:
Parses the csv file and returns an array of objects.

| Group-Object -Property "department":
Since we need to split by department, it makes sense to group objects by the department property.

| Foreach-Object {...}:
We need to apply an action to each group (department), so we pipeline the resulting groups to Foreach-Object.

$path=$_.name+".csv":
Within the foreach, we need to create a temporary variable ($path) to be passed to the next pipeline stage, which is responsible for the actual saving. Note that I use the semicolon ";" to separate this part from the next, and I used the Name property of the group (which maps to the department name in our case) to build the file name.

$_.group | Export-Csv -Path $path -NoTypeInformation:
Then for each group, we need to export its contents (the csv file rows) to the file path created in the previous step. So we again pipeline the Group property of the group item (which is an ArrayList of the original objects) to the Export-Csv cmdlet.

And the result should be files like:
Finance.csv:
"Department","Employee"
"Finance","emp4"
"Finance","emp5"
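To see what Group-Object produces before the export step, you can run it over in-memory objects. A sketch with hypothetical sample data (no csv file needed):

```powershell
# Each output object has a Name (the department) and a Group
# (the original rows), which is exactly what the one-liner relies on
$rows = @(
    [pscustomobject]@{ Department = 'Sales'; Employee = 'emp1' }
    [pscustomobject]@{ Department = 'HR';    Employee = 'emp2' }
    [pscustomobject]@{ Department = 'Sales'; Employee = 'emp3' }
)
$rows | Group-Object -Property Department | Select-Object Name, Count
```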

Saturday, July 11, 2009

Using PowerShell and SMO to change database columns collations

Changing the collations of a SQL Server database's columns manually can be a tedious task. I have a database whose non-system columns should all get the collation "Arabic_CS_AS".
Here is a PowerShell script that uses SQL Server Management Objects (SMO) to do this task:
(note that I load the assemblies with version 10.0.0.0, which is the version of SQL Server 2008 I have installed on my system)


[System.Reflection.Assembly]::Load("Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
[System.Reflection.Assembly]::Load("Microsoft.SqlServer.ConnectionInfo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")

$con = New-Object Microsoft.SqlServer.Management.Common.ServerConnection

$con.ConnectionString="Data Source=.\SQLExpress;Integrated Security=SSPI;"

$con.Connect()

$srv = New-Object Microsoft.SqlServer.Management.Smo.Server $con
$db = $srv.Databases["test"]

foreach ($table in $db.Tables)
{
    if ($table.IsSystemObject)
    {
        continue
    }

    foreach ($column in $table.Columns)
    {
        if (-not ([string]::IsNullOrEmpty($column.Collation)))
        {
            $column.Collation = "Arabic_CS_AS"
            $column.Alter()
        }
    }
}

$con.Disconnect()

Saturday, May 2, 2009

Downloading files using Windows PowerShell with progress information

Downloading a file using PowerShell is very easy: you just need to call the WebClient.DownloadFile method from the .NET Framework. But let's make things more interesting.

I need a script to download a list of files, whose URLs are specified in a file, to a local folder. The script should check whether each file already exists; if it does, it should skip to the next file. To give a better user experience, a progress bar will be displayed to notify the user about the progress.
Error handling and reporting is important, so we'll take care of it inside the script.

Let's start analyzing how to accomplish this.

The code:
The first line of code defines the script parameters: inputFile, the path of the file that contains the list of URLs, and folder, where we'll download the files to.

param ([string] $inputFile,[string]$folder)

We make some basic input validation to check if parameters are set:

trap {Write-Host "Error: $_" -Foregroundcolor Red -BackGroundColor Black;exit}

if([string]::IsNullOrEmpty($folder))
{
throw "folder parameter not set";
}

if([string]::IsNullOrEmpty($inputFile))
{
throw "inputFile parameter not set";
}


The above code starts by defining an error handler that will run when a terminating error occurs in the script (in the current scope). It simply says: when an error occurs, write it to the host and exit the script.
Note that I pass the ForegroundColor and BackgroundColor parameters to the Write-Host cmdlet so the user gets the same experience as with other PowerShell errors.
The next validation code simply checks whether the parameters were passed. If not, an error is thrown.

Next, we read the contents of the input file:

$files = Get-Content $inputFile -ErrorAction Stop

I use the Get-Content cmdlet, specifying the ErrorAction parameter as Stop. ErrorAction is a common parameter for PowerShell cmdlets; the Stop value asks PowerShell to treat any error as a fatal error that stops the script. But since I defined the trap handler as shown above, the same error handling applies: the error message will be displayed and the script will exit.
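This interaction can be seen in miniature below. Unlike the script's top-level trap, this sketch uses continue so the demonstration keeps running (the file name is made up):

```powershell
# Get-Content on a missing file normally raises a non-terminating error;
# -ErrorAction Stop promotes it to a terminating one, which the trap catches.
# 'continue' in the trap resumes execution at the next statement.
trap { Write-Host "Trapped: $_" -ForegroundColor Red; continue }
Get-Content "no-such-file-hopefully.txt" -ErrorAction Stop
Write-Host "execution continues after the trapped error"
```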

The next line creates a WebClient object to be used in the download process:

$web = New-Object System.Net.WebClient

Next, I initialize a counter to be used in the progress display, based on the number of files downloaded so far. Then a foreach loop is used to iterate over the files.
Note that I define another trap handler that will act whenever an error is thrown within the loop scope. It calls a specific function that handles download errors, then continues the loop:

foreach($file in $files)
{
trap {ErrorHandler($_);continue}
.
.
.
}


Inside the loop, I build the download file path and display the progress using the Write-Progress cmdlet.

$path = [IO.Path]::Combine($folder,$file.SubString($file.LastIndexOf("/")+1));

Write-Progress -Activity "downloading" -Status $path -PercentComplete (($i / $files.Length)*100)


And if the file does not exist, DownloadFile is called.

if([System.IO.File]::Exists($path) -eq $False )
{
$web.DownloadFile($file,$path)
}


I used the script to download presentations from the MIX 2009 conference. (The attached ZIP file includes both the PowerShell script and the file that contains the URLs.)




Complete code listing:

param ([string] $inputFile,[string]$folder)

function ErrorHandler($error)
{
Write-Host "Error while downloading file:$file" -Foregroundcolor Red -BackGroundColor Black
Write-Host $error -Foregroundcolor Red -BackGroundColor Black
Write-Host ""
}

trap {Write-Host "Error: $_" -Foregroundcolor Red -BackGroundColor Black;exit}

if([string]::IsNullOrEmpty($folder))
{
throw "folder parameter not set";
}

if([string]::IsNullOrEmpty($inputFile))
{
throw "inputFile parameter not set";
}


$files = Get-Content $inputFile -ErrorAction Stop

$web = New-Object System.Net.WebClient
$i = 0
foreach($file in $files)
{
trap {ErrorHandler($_);continue}

$path = [IO.Path]::Combine($folder,$file.SubString($file.LastIndexOf("/")+1));

Write-Progress -Activity "downloading" -Status $path -PercentComplete (($i / $files.Length)*100)



if([System.IO.File]::Exists($path) -eq $False )
{
$web.DownloadFile($file,$path)
}

$i = $i+1
}

Saturday, April 18, 2009

Calling a PowerShell script in a path with a white space from command line

I got stuck on this problem once, so here is a solution in case you face it.
First, how do you call a script from the PowerShell console when the script file path contains white space? Executing this:
PS C:\> c:\new folder\myscript.ps1 param1
will give this error:
The term 'c:\new' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.

Putting the path between quotation marks like this:
PS C:\> "c:\new folder\myscript.ps1" param1
Will lead to:
Unexpected token 'param1' in expression or statement.

And the solution is to use the invoke (call) operator "&", which is used to run script blocks:
PS C:\> & 'c:\new folder\myscript.ps1' param1

So far so good. Now for the next part: calling this from the command line.
Executing a PowerShell script from the command line is as easy as:
C:\Documents and Settings\Hesham>powershell c:\MyScript.ps1 param1

This is fine as long as the script path has no spaces. For example, executing:
C:\Documents and Settings\Hesham>powershell c:\new folder\MyScript.ps1 param1
Again gives:
The term 'c:\new' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.


With the help of PowerShell -?, here is a solution:
C:\Documents and Settings\Hesham>powershell -command "& {&'c:\new folder\MyScript.ps1' param1}"

Tada!!

Saturday, March 7, 2009

Why should you learn PowerShell?

Whether you are a software developer, a tester, a system administrator, or even a regular user, PowerShell has something to offer you.

Its amazing capabilities open many possibilities for you, and it can be used in several scenarios:
  • For system administrators: a quick and easy way to deal with the system in a consistent manner. You'll have the power of many built-in commands plus the .NET CLR. Using it, you can perform tasks like managing the file system and permissions, monitoring event logs, and working with Active Directory. And so much more.
  • As a developer: you can use PowerShell commands to automate systems like Exchange Server from your application. Most Microsoft server products support, or will support, PowerShell as a programmable interface for automating the product. You can also use it to diagnose production or testing issues that you suspect have an environment-related root cause.
  • As a tester: PowerShell can be used for test automation. With its easy-to-use commands and simple syntax, I believe it's very suitable for this purpose. Have a look at: Why Should I Test With PowerShell?
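As a small taste of that consistency, here is a typical administrator one-liner: every stage passes .NET objects (FileInfo here), not text, so properties like Length flow all the way down. (The -File switch shown needs PowerShell 3.0 or later; in older versions you would filter with Where-Object { -not $_.PSIsContainer }.)

```powershell
# The five largest files under PowerShell's own install directory,
# largest first; swap $PSHOME for any folder you care about
Get-ChildItem $PSHOME -Recurse -File |
    Sort-Object Length -Descending |
    Select-Object -First 5 Name, Length
```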

If you want to get a high-level picture of what PowerShell can do for you and the flexibility it provides, you can watch this video by Jeffrey Snover, the architect of PowerShell:

Monday, February 2, 2009

How to know Active Directory attribute names

When dealing programmatically with Active Directory objects using .NET code, VBScript, or PowerShell, you need to set values of attributes you find in the "Active Directory Users and Computers" snap-in (run dsa.msc). But these names are not always the same as the names used when setting attribute values in code. So how do you find the attribute names?

I stumbled upon a nice MSDN page that has the mappings for the Active Directory Users and Computers snap-in. It has links to object-type-specific mappings from UI labels to attribute names.
For example, the User Object User Interface Mapping page shows that the Office UI label maps to physicalDeliveryOfficeName. How could you have guessed that?

Friday, January 30, 2009

Working with Active Directory using PowerShell

Working with Active Directory is one of the important administrative tasks. VBScript was the language administrators used most to automate repetitive tasks.
Now Windows PowerShell is the future, so it's important to know how to use it to work with Active Directory.
I'll provide a simple example that should clarify some concepts. In this scenario, we need to set the email attribute of all users under a certain OU (Organizational Unit) to the format sAMAccountName@domainname.com and output the results to a text file.
PowerShell 1.0 does not have specific built-in cmdlets for handling Active Directory objects, but it has basic support via [ADSI]. This will not limit us, as we can still use the .NET class library easily from PowerShell.
Here is how the code works:


  • First we declare a variable that holds the output file path:
    $filePath = "c:\MyFile.txt"

  • Then, create the root directory entry, which represents the OU whose users we need to modify. Note the LDAP path: it says "get the OU named MyOU from the domain win2008.demo"
    $rootOU=[ADSI]"LDAP://ou=MyOU,dc=win2008,dc=demo"

  • We need to get all users under this OU, so we create a .NET DirectorySearcher instance using the New-Object cmdlet:
    $searcher= New-Object System.DirectoryServices.DirectorySearcher

  • Setting the root of the search to the OU and the filter to match users only, then finding all objects that match the filter:
    $searcher.searchroot=$rootOU
    $searcher.Filter = "objectclass=user"
    $res=$searcher.FindAll()

  • Initializing the output file by writing the string "Emails:"
    "Emails:" | Out-File -FilePath $filePath

  • Iterating on the results:
    foreach($u in $res)

  • Getting the user object and setting the mail attribute, and committing:
    $user = $u.GetDirectoryEntry()
    $name=$user.sAMAccountname
    $user.mail="$name@win2008.demo"
    $user.SetInfo()

  • Appending the mail to the output file (note the -Append parameter):
    $user.mail | Out-File -FilePath $filePath -Append

You can save these commands to a .ps1 file and execute it from PowerShell, for example:
c:\filename.ps1
Note that you need to execute Set-ExecutionPolicy RemoteSigned first.

And here is the complete code listing, note that no error checking or handling is included for simplicity.


#Set-ExecutionPolicy RemoteSigned

$filePath = "c:\MyFile.txt"

$rootOU=[ADSI]"LDAP://ou=MyOU,dc=win2008,dc=demo"

$searcher= New-Object System.DirectoryServices.DirectorySearcher

$searcher.searchroot=$rootOU
$searcher.Filter = "objectclass=user"

$res=$searcher.FindAll()

"Emails:" | Out-File -FilePath $filePath
foreach($u in $res)
{
$user = $u.GetDirectoryEntry()

$name=$user.sAMAccountname
$user.mail="$name@win2008.demo"
$user.SetInfo()

$user.mail | Out-File -FilePath $filePath -Append

$user.psbase.Dispose()


}
$rootOU.psbase.Dispose()
$res.Dispose()
$searcher.Dispose()

Friday, December 5, 2008

Handling errors when calling PowerShell using C#

In a previous post, I explained how to call PowerShell Commands from C# code in an easy way. But what about error handling?
First, we should know the types of errors that need to be handled:
  • Exceptions due to errors in the syntax of the command or script.
  • Errors generated by the commands themselves due to logical errors or invalid parameters.
Handling Exceptions:
Several exceptions can be thrown. The base class for PowerShell exceptions is System.Management.Automation.RuntimeException. Exceptions can originate from bad syntax or from calling invalid commands.
Handling this kind of error follows the usual .NET exception-handling pattern, as this example shows:


try
{
    using (RunspaceInvoke invoke = new RunspaceInvoke())
    {
        result = invoke.Invoke("dir " + "c:\\" + " -recurse -Filter *.exe");
    }
}
catch (System.Management.Automation.RuntimeException ex)
{
    Console.WriteLine(ex.Message);
    // Specific handling for PowerShell errors
}
catch (Exception ex)
{
    // General handling and logging
}

Command Errors:
These are the errors that the commands themselves return. For example, when you run this command in PowerShell while you don't have a Q drive:
dir Q:
You'll get this error:
Get-ChildItem : Cannot find drive. A drive with name 'q' does not exist.
At line:1 char:4
Which makes sense. But how do we check for this kind of error?
The Invoke method of the RunspaceInvoke class has an overload that accepts an IList as the third parameter. This out parameter will contain a list of the errors that have occurred.


IList errors;
using (RunspaceInvoke invoke = new RunspaceInvoke())
{
    result = invoke.Invoke("dir " + "Q:\\" + " -recurse -Filter *.exe", null, out errors);
}

if (errors.Count > 0)
{
    PSObject error = errors[0] as PSObject;
    if (error != null)
    {
        ErrorRecord record = error.BaseObject as ErrorRecord;
        Console.WriteLine(record.Exception.Message);
        Console.WriteLine(record.FullyQualifiedErrorId);
    }

    return;
}

In the above code, I cast the error to PSObject, get its BaseObject, and cast that to ErrorRecord, which contains the error information.

An interesting part here is that you can check the FullyQualifiedErrorId property, a string you can use to distinguish errors and build logic that handles specific errors. The ErrorDetails property can also be checked, but be careful because it can be null.

I hope this post can make your programming with PowerShell easier.

Friday, September 5, 2008

Calling PowerShell Commands from C# code

Windows PowerShell is the solution to something the Windows platform long lacked: a really powerful shell! But it does not stop there. Calling PowerShell commands (cmdlets) from .NET code and using the return values can add real power to your application.

In this post, I'll explain how to gain this power in a simple C# application that recursively searches the file system and retrieves the results. The same technique can be used in many other scenarios.

Before starting to code:
First you need to install Windows PowerShell for your Windows version, so check How to Get Windows PowerShell 1.0.

Next, add a reference to System.Management.Automation.dll to the application. It can be found under C:\Program Files\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0 in the case of a Windows XP installation.

After adding this using statement, you'll be ready to code:
using System.Management.Automation;

Starting simple: getting a list of files and folders recursively:
There are many ways to integrate with PowerShell; here I'll use a relatively simple one, the RunspaceInvoke class.

First, let's examine this command in PowerShell:
dir C:\WINDOWS -Recurse
Executing this command in PowerShell will get all files and folders under C:\WINDOWS recursively. By the way, dir is an alias for the Get-ChildItem cmdlet, not a command itself.

This code creates a RunspaceInvoke instance and invokes dir:

System.Collections.ObjectModel.Collection<PSObject> result = null;
using (RunspaceInvoke invoke = new RunspaceInvoke())
{
result = invoke.Invoke("dir " + "D:\\Codes\\C#\\PSDir" + " -recurse");
}

Note that the Invoke method returns a generic collection of PSObjects. We'll use this collection to retrieve the results. This is one of the really powerful features of PowerShell: it talks objects, not text like the old command-line commands.

But what can we get from the collection of PSObjects? The PSObject class has an ImmediateBaseObject property that contains the actual object returned by the command. In the case of dir, we expect the command to return objects of types System.IO.DirectoryInfo and System.IO.FileInfo. We'll see later how we can discover this. But for now, let's get the results:

foreach (PSObject item in result)
{
if (item.ImmediateBaseObject is System.IO.DirectoryInfo)
{
System.IO.DirectoryInfo info = item.ImmediateBaseObject as System.IO.DirectoryInfo;
Console.WriteLine("Directory:" + info.FullName);
}
else if (item.ImmediateBaseObject is System.IO.FileInfo)
{
System.IO.FileInfo info = item.ImmediateBaseObject as System.IO.FileInfo;
Console.WriteLine("File:" + info.FullName);
}
}

As you see in the above code, I check the type of each result and display the appropriate value based on it. The result will be something like this:




A more useful example: searching the file system:
You can use dir to search for all files of a specific type under a specific path using the -Filter parameter. Here is a complete code listing:

System.Collections.ObjectModel.Collection<PSObject> result = null;
using (RunspaceInvoke invoke = new RunspaceInvoke())
{
result = invoke.Invoke("dir " + "D:\\Codes\\C#\\PSDir" + " -recurse -Filter *.exe");
}

foreach (PSObject item in result)
{
if (item.ImmediateBaseObject is System.IO.DirectoryInfo)
{
System.IO.DirectoryInfo info = item.ImmediateBaseObject as System.IO.DirectoryInfo;
Console.WriteLine("Directory:" + info.FullName);
}
else if (item.ImmediateBaseObject is System.IO.FileInfo)
{
System.IO.FileInfo info = item.ImmediateBaseObject as System.IO.FileInfo;
Console.WriteLine("File:" + info.FullName);
}
}

Console.ReadKey();

Exploring PowerShell command results:
In the above code, I expected the results to be the .NET types System.IO.DirectoryInfo and System.IO.FileInfo. But how could one detect the type?
PowerShell is very discoverable. For example, executing these statements in PowerShell lets us know the return type (note that the results depend on your file system):
$result = dir c:\
$result[0].GetType().Name

This returns:
DirectoryInfo
Trying another index like 9:
$result[9].GetType().Name
returns:
FileInfo
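Get-Member takes this discoverability further: instead of probing indexes one by one, it reports the type and members of whatever comes down the pipeline. A quick sketch, run here against $PSHOME (PowerShell's own install directory, which is never empty):

```powershell
# Pipe any command's output to Get-Member to see its .NET type and members
$result = Get-ChildItem $PSHOME
$result[0] | Get-Member -MemberType Property | Select-Object -First 3 Name
```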

In this post, we made use of PowerShell commands and manipulated their results. In another post, I'll explain how to check for errors that may occur when running PowerShell commands and how to handle them.