Thursday, November 12, 2009

Getting started with SharePoint 2010

SharePoint 2010 is coming with a lot of new features, and if you are seriously interested in SharePoint, you should start building your SharePoint 2010 knowledge. My colleague Mourad Askar posted on his blog about SharePoint 2010 Information and Tutorials, which I find very informative.
On Microsoft's SharePoint 2010 site, you can find tutorials and videos, and you can register for the beta.

One important thing: SharePoint 2010 will not run on 32-bit machines. If you have a Core 2 Duo processor, it supports 64-bit. And if you're in doubt and want to run SharePoint on a virtual machine, check VMware's tool that tests whether your processor is capable of running a 64-bit guest OS.
http://download3.vmware.com/software/wkst/VMware-guest64check-5.5.0-18463.exe

Personally, I have a Core 2 Duo laptop with Vista 32-bit installed. I installed the 64-bit edition of Ubuntu Linux 9.04 and can host a Windows Server 2008 64-bit OS using VirtualBox. What's good about this scenario is that Ubuntu's memory usage is light, which leaves good room for hosting virtual machines.

Saturday, July 11, 2009

Using PowerShell and SMO to change database columns collations

Changing the collations of a SQL Server database's columns manually can be a tedious task. I have a database in which I want to change the collation of all non-system columns to "Arabic_CS_AS".
Here is a PowerShell script that uses SQL Server Management Objects (SMO) to do this task:
(note that I load the assemblies with version 10.0.0.0, which is the version of SQL Server 2008 I have installed on my system)


[System.Reflection.Assembly]::Load("Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
[System.Reflection.Assembly]::Load("Microsoft.SqlServer.ConnectionInfo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")

$con = New-Object Microsoft.SqlServer.Management.Common.ServerConnection

$con.ConnectionString="Data Source=.\SQLExpress;Integrated Security=SSPI;"

$con.Connect()

$srv = New-Object Microsoft.SqlServer.Management.Smo.Server $con
$db = $srv.Databases["test"]

foreach ($table in $db.Tables)
{
    if($table.IsSystemObject)
    {
        continue
    }

    foreach($column in $table.Columns)
    {
        if(-not ([string]::IsNullOrEmpty($column.Collation)))
        {
            $column.Collation = "Arabic_CS_AS"
            $column.Alter()
        }
    }
}

$con.Disconnect()
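If you want to preview which columns will be touched before making any changes, a quick sketch like the following (using the same $db object from the script above) lists the non-system columns together with their current collations:

# List current collations of non-system columns before changing anything
foreach ($table in $db.Tables | Where-Object { -not $_.IsSystemObject })
{
    # Only columns of character types carry a collation
    foreach ($column in $table.Columns | Where-Object { $_.Collation })
    {
        "{0}.{1}: {2}" -f $table.Name, $column.Name, $column.Collation
    }
}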

Thursday, July 2, 2009

Not calling Dispose can cause InvalidComObjectException

While load testing an application that performs thousands of operations against Active Directory, and under high load conditions, the process stopped working and our logs showed this error:

System.Runtime.InteropServices.InvalidComObjectException: COM object that has been
separated from its underlying RCW cannot be used.

Searching for this error, most answers on forums pointed to accessing a COM object from a thread other than the one that created it. We do use multithreading, but we did not share objects across threads.

The logs pointed us to the location in the code we should investigate. I reviewed a method that was called thousands of times and creates DirectoryEntry instances. The DirectoryEntry was never disposed!!

We were not sure that this could cause the above exception, but it was a bug and it needed to be fixed anyway. We fixed it, reapplied the scenarios that had caused the exception, and it disappeared.
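For illustration only (the actual code lives in the .NET application, not in PowerShell), the fix boils down to making sure every DirectoryEntry is disposed as soon as you are done with it; the LDAP path below is just a placeholder:

# Placeholder path; the important part is the explicit Dispose call
$entry = New-Object System.DirectoryServices.DirectoryEntry("LDAP://CN=Some User,DC=example,DC=com")

# ... read or update attributes here ...

# Release the underlying COM object as soon as we are done instead of waiting for the finalizer
$entry.psbase.Dispose()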

Other than understanding a new reason for that mysterious exception, there are some useful lessons:
  • Proper logging can help identify errors quickly.
  • Failing to dispose disposable objects causes performance penalties whose effect some developers underestimate. It can even make your application stop working!!
  • Early code review is important to spot these kinds of errors.
  • Test your application under real-life conditions.
  • Using memory profilers and dispose trackers is worth trying in some cases.

Wednesday, May 13, 2009

More great things about Visual Web Developer 2008 Express

In case you have not installed it yet: in addition to the great features Visual Web Developer 2008 Express already has, SP1 is even better!!

It now has more project templates. The most important one is the Web Application template (I hate the Website template).
You can also add a class library project to the same solution!!


Download and enjoy!!

Saturday, May 2, 2009

Downloading files using Windows PowerShell with progress information

Downloading a file using PowerShell is very easy: you just need to call the WebClient.DownloadFile method from the .NET Framework.
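For a one-off download, the whole job can be a single line (the URL and local path below are just placeholders):

(New-Object System.Net.WebClient).DownloadFile("http://example.com/file.zip", "c:\temp\file.zip")

But let's make things more interesting.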

I need a script that downloads a list of files, whose URLs are specified in a file, to a local folder. The script should check whether each file already exists; if it does, it should skip to the next one. To give a better user experience, a progress bar will be displayed to show the download progress.
Error handling and reporting are important, so we'll take care of them inside the script.

Let's start analyzing how to accomplish this.

The code:
The first line of code defines the script parameters: inputFile, the path of the file that contains the list of URLs, and folder, the folder we'll download the files to.

param ([string] $inputFile,[string]$folder)
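Assuming the script is saved as download.ps1 (the file name is just for illustration), it can then be invoked like this:

PS C:\> .\download.ps1 -inputFile 'c:\urls.txt' -folder 'c:\downloads'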

We do some basic input validation to check that the parameters are set:

trap {Write-Host "Error: $_" -Foregroundcolor Red -BackGroundColor Black;exit}

if([string]::IsNullOrEmpty($folder))
{
throw "folder parameter not set";
}

if([string]::IsNullOrEmpty($inputFile))
{
throw "inputFile parameter not set";
}


The above code starts by defining an error handler that runs when a terminating error occurs in the script (in the current scope). It simply says: when an error occurs, write it to the host and exit the script.
Note that I pass the Foregroundcolor and BackGroundColor parameters to the Write-Host Cmdlet so the user gets the same experience as with other PowerShell errors.
The next validation code simply checks whether the parameters were passed. If not, an error is thrown.

Next, we read the contents of the input file:

$files = Get-Content $inputFile -ErrorAction Stop

I use the Get-Content Cmdlet with the ErrorAction parameter set to Stop. ErrorAction is a common parameter for PowerShell Cmdlets. The Stop value asks PowerShell to treat any error as a terminating error that stops the script. But since I defined the trap handler shown above, the same error handling applies: the error message is displayed and the script exits.

The next line creates a WebClient object to be used in the download process:

$web = New-Object System.Net.WebClient

Next, I initialize a counter that is used to display progress based on the number of files downloaded so far. Then a foreach loop is used to iterate over the files.
Note that I define another trap handler that acts whenever an error is thrown within the foreach loop's scope. It calls a function that handles download errors and then continues the loop:

foreach($file in $files)
{
trap {ErrorHandler($_);continue}
.
.
.
}


Inside the loop, I create the download file path and display the progress using the Write-Progress Cmdlet.

$path = [IO.Path]::Combine($folder,$file.SubString($file.LastIndexOf("/")+1));

Write-Progress -Activity "downloading" -Status $path -PercentComplete (($i / $files.Length)*100)


And if the file does not exist, DownloadFile is called.

if([System.IO.File]::Exists($path) -eq $False )
{
$web.DownloadFile($file,$path)
}


I used the script to download presentations from the MIX 2009 conference. The attached ZIP file includes both the PowerShell script and the file that contains the URLs.




Complete code listing:

param ([string] $inputFile,[string]$folder)

function ErrorHandler($error)
{
Write-Host "Error while downloading file:$file" -Foregroundcolor Red -BackGroundColor Black
Write-Host $error -Foregroundcolor Red -BackGroundColor Black
Write-Host ""
}

trap {Write-Host "Error: $_" -Foregroundcolor Red -BackGroundColor Black;exit}

if([string]::IsNullOrEmpty($folder))
{
throw "folder parameter not set";
}

if([string]::IsNullOrEmpty($inputFile))
{
throw "inputFile parameter not set";
}


$files = Get-Content $inputFile -ErrorAction Stop

$web = New-Object System.Net.WebClient
$i = 0
foreach($file in $files)
{
    trap {ErrorHandler($_);continue}

    $path = [IO.Path]::Combine($folder,$file.SubString($file.LastIndexOf("/")+1));

    Write-Progress -Activity "downloading" -Status $path -PercentComplete (($i / $files.Length)*100)

    if([System.IO.File]::Exists($path) -eq $False )
    {
        $web.DownloadFile($file,$path)
    }

    $i = $i+1
}

Wednesday, April 22, 2009

Oracle buys Sun!!

OK, I think you already know this. But it's not the news I'm talking about. I'm talking about the implications.

I have not used Java technology a lot; I admire it, but I cannot consider myself a Java developer. However, having Java technology in our world, and its competition with Microsoft's .NET Framework, is a healthy thing. It gives us options.
Some Java developers were worried after news about the deal spread. I think that Oracle + Java has been, and will stay, a good choice for building enterprise applications. Maybe the future will bring news about better and better RIA applications built on the Java platform.




What I really care about, and what made me worried, is the future of MySQL, the most popular open source database, which has a free community edition.
How is this related to MySQL? Here is the story:

MySQL has a pluggable database engine architecture, so it has many database engines to choose from, and you deal with them all using the same SQL dialect.
One famous engine is InnoDB, a transactional, high-performance engine that was widely used by MySQL users. InnoDB was developed by Innobase, a Finnish company.
Oracle acquired Innobase, which caused worries about the future of the transactional engine. Then Sun bought MySQL, and MySQL released the Falcon storage engine.



Now, what is the future of MySQL under the control of Oracle, the database giant? The problem is not only that Oracle may kill MySQL gradually (I don't think it will do so directly), but also that MySQL engineers started to leave Sun after the Oracle deal (more about this in this article).

So, big fish eat small fish. Now it's the turn of open source advocates and MySQL users to have their say.

Saturday, April 18, 2009

Calling a PowerShell script in a path with a white space from command line

I got stuck on this problem once, so here is a solution in case you face it.
First, how do you call a script from the PowerShell console when the script file path contains white space? Because executing this:
PS C:\> c:\new folder\myscript.ps1 param1
will give this error:
The term 'c:\new' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.

Putting the path between quotations like this:
PS C:\> "c:\new folder\myscript.ps1" param1
Will lead to:
Unexpected token 'param1' in expression or statement.

And the solution is to use the call (invoke) operator "&", which is used to run script blocks:
PS C:\> & 'c:\new folder\myscript.ps1' param1

So far so good. Now on to the next part, which is calling this from the command line.
Executing a PowerShell script from the command line is as easy as:
C:\Documents and Settings\Hesham>powershell c:\MyScript.ps1 param1

This is fine as long as the script path has no spaces. For example, executing:
C:\Documents and Settings\Hesham>powershell c:\new folder\MyScript.ps1 param1
Again gives:
The term 'c:\new' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.


With the help of PowerShell -?, here is a solution:
C:\Documents and Settings\Hesham>powershell -command "& {&'c:\new folder\MyScript.ps1' param1}"

Tada!!

Saturday, March 7, 2009

Why should you learn PowerShell?

Whether you are a software developer, a tester, a system administrator, or even a regular user, PowerShell has something to offer you.

Its amazing capabilities open many possibilities, and it can be used in several scenarios:
  • For system administrators: a quick and easy way to deal with the system in a consistent manner. You'll have the power of many built-in commands plus the .NET CLR. Using it, you can perform many tasks such as managing the file system and permissions, monitoring event logs, working with Active Directory, and so much more (see the one-liner right after this list).
  • As a developer: you can use PowerShell commands to automate systems like Exchange Server from your application. Most Microsoft server products support, or will support, PowerShell as a programmable interface for automating the product. You can also use it to investigate production or testing issues that you suspect have an environment-related root cause.
  • As a tester: PowerShell can be used for test automation. With its easy-to-use commands and simple syntax, I believe it's very suitable for this purpose. Have a look at: Why Should I Test With PowerShell?
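
As a tiny taste of that, here is a one-liner (just an illustration) that pulls the most recent error entries from the Application event log:

Get-EventLog Application -Newest 50 | Where-Object { $_.EntryType -eq "Error" }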

If you want to get a high-level picture of what PowerShell can do for you and the flexibility it provides, you can watch this video by Jeffrey Snover, the architect of PowerShell.

Monday, February 2, 2009

How to know Active Directory attribute names

When dealing programmatically with Active Directory objects using .NET code, VBScript, or PowerShell, you need to set values of attributes you find in the "Active Directory Users and Computers" snap-in (run dsa.msc). But these names are not always the same as the names used when setting attribute values in code. So how do you find the attribute names?

I stumbled upon a nice MSDN page that has the mappings for the Active Directory Users and Computers snap-in. It has links from object-type-specific UI labels to attribute names.
For example, the User Object User Interface Mapping page shows that the Office UI label maps to physicalDeliveryOfficeName. How could you have guessed that?
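Once you know the real attribute name, setting it from PowerShell is straightforward. Here is a quick sketch; the distinguished name and office value below are just placeholders:

$user = [ADSI]"LDAP://CN=Some User,OU=MyOU,DC=win2008,DC=demo"
$user.physicalDeliveryOfficeName = "Building 1, Office 101"
$user.SetInfo()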

Friday, January 30, 2009

Working with Active Directory using PowerShell

Working with Active Directory is one of the important administrative tasks. VBScript was the language administrators used most to automate repetitive tasks.
Now, Windows PowerShell is the future, so it's important to know how to use it to work with Active Directory.
I'll provide a simple example that should clarify some concepts. In this scenario, it's required to set the email attribute of all users under a certain OU (Organizational Unit) in the format sAMAccountName@domainname.com and output the results to a text file.
PowerShell 1.0 does not have specific built-in Cmdlets to handle Active Directory objects, but it has basic support through [ADSI]. This will not limit us, as we can still use the .NET class library easily from PowerShell.
Here is how the code works:


  • First we declare a variable that holds the output file path:
    $filePath = "c:\MyFile.txt"

  • Then, create the root directory entry, which represents the OU whose users we need to modify. Note the LDAP path: it says get the OU named "MyOU" from the domain "win2008.demo":
    $rootOU=[ADSI]"LDAP://ou=MyOU,dc=win2008,dc=demo"

  • We need to get all users under this OU, so we create a .NET DirectorySearcher instance using the New-Object Cmdlet:
    $searcher= New-Object System.DirectoryServices.DirectorySearcher

  • Setting the root of the search to the OU and the filter to find users only, then finding all objects that match the filter:
    $searcher.searchroot=$rootOU
    $searcher.Filter = "objectclass=user"
    $res=$searcher.FindAll()

  • Initializing the output file by writing the string "Emails:" (note that the string is piped to Out-File):
    "Emails:" | Out-File -FilePath $filePath

  • Iterating on the results:
    foreach($u in $res)

  • Getting the user object and setting the mail attribute, and committing:
    $user = $u.GetDirectoryEntry()
    $name=$user.sAMAccountname
    $user.mail="$name@win2008.demo"
    $user.SetInfo()

  • Appending the mail to the output file (note the append parameter):
    $user.mail | Out-File -FilePath $filePath -append

You can save these commands to a .ps1 file and execute it from PowerShell, for example:
c:\filename.ps1
Note that you need to execute Set-ExecutionPolicy RemoteSigned first.

And here is the complete code listing; note that no error checking or handling is included, for simplicity.


#Set-ExecutionPolicy RemoteSigned

$filePath = "c:\MyFile.txt"

$rootOU=[ADSI]"LDAP://ou=MyOU,dc=win2008,dc=demo"

$searcher= New-Object System.DirectoryServices.DirectorySearcher

$searcher.searchroot=$rootOU
$searcher.Filter = "objectclass=user"

$res=$searcher.FindAll()

"Emails:" Out-File -FilePath $filePath
foreach($u in $res)
{
$user = $u.GetDirectoryEntry()

$name=$user.sAMAccountname
$user.mail="$name@win2008.demo"
$user.SetInfo()

$user.mail Out-File -FilePath $filePath -append

$user.psbase.Dispose()


}
$rootOU.psbase.Dispose()
$res.Dispose()
$searcher.Dispose()

Thursday, January 1, 2009

Articles I read in 2008

It's a new year, and it's time to share a list of most of the articles I read last year. Hopefully you'll find it interesting and useful.

The 2007 list can be found here.