Archive for the 'PowerShell' Category


Three quarters of 2015, my IT career and various ramblings

Monday, October 5th, 2015

September is over. The first three quarters of 2015 are over.
This has been a very important year so far – difficult, but revealing. Everything has been about change, healing and renewal.

We moved back to Europe first and, more recently, you might have also read my other post about leaving Microsoft.

This was a hard choice – it took many months to reach the conclusion that this was what I needed to do.

Most people have gone through strong programming: they think you have to be 'successful' at something. Success is externally defined, anyway (as opposed to satisfaction, which we define ourselves), and therefore you are supposed to study a certain field in college, then use that at work to build your career in the same field… and keep doing the same thing.

I was never like that – I didn't go to college, I didn't study as an 'engineer'. I just saw there was a market opportunity to find a job when I started, studied on the job, eventually excelled at it. But it never was *the* road. It just was one road; it has served me well so far, but it was just one thing I tried, and it worked out.
How did it start? As a pre-teen I had been interested in computers, then left that for a while, did a 'normal' high school (in Italy at the time, this was really non-technological), then tried to study sociology for a little bit – I really enjoyed the Cultural Anthropology lessons there, and we were smoking good weed with some folks outside of the university, but I really could not be bothered to spend the following 5 or 10 years of my life just studying and 'hanging around' – I wanted money and independence to move out of my parents' house.

So, without much fanfare, I revived my IT knowledge: upgraded my skill from the 'hobbyist' world of the Commodore 64 and Amiga scene (I had been passionate about modems and the BBS world then), looked at the PC world of the time, rode the 'Internet wave' and applied for a simple job at an IT company.

A lot of my friends were either not even searching for a job, with the excuse that there weren't any, or spending time in university – in a time of change, when all the university-level jobs were taken anyway, so that would have meant waiting even longer after they had finished studying… I am not even sure they realized this until much later.
But I just applied, played my cards, and got my job.

When I went to sign the contract, they also reminded me they expected hard work at the simplest and humblest level: I would have to fix PCs and printers, and help users with networking issues and tasks like those – at a customer of theirs, a big company.
I was ready to roll up my sleeves and help that IT department however I would be capable of, and I did.
It all grew from there.

And that's how my IT career started. I learned all I know of IT on the job, by working my ass off, studying extra hours, watching older/more expert colleagues and gaining experience.

I am not an engineer.
I am, at most, a mechanic.
I did learn a lot about companies and the market, languages, designs, politics, and the human and technical factors in software engineering and the IT marketplace/worlds, over the course of the past 18 years.

But when I started, I was just trying to lend an honest hand, to get paid some money in return – isn't that what work was about?

Over time, IT got out of control. Like Venom in the Marvel comics, which made its appearance as a costume that Spider-Man started wearing… and slowly took over, as the 'costume' was in reality some sort of alien symbiotic organism (like a pest).

You might be wondering what I mean. From the outside I was a successful Senior Program Manager of a 'hot' Microsoft product.
Someone must have mistaken my diligence and hard work for 'talent' or 'desire for a career' – but it never was that.
I got pushed up, taught to never turn down 'opportunities'.

But I don't feel this is my path anymore.
That type of work drains too much mental energy from me, and it made me neglect myself and my family. Success at the expense of my own health and my family's isn't worth it. Other people have written that too – in my case, hopefully, I stopped earlier.

So what am I doing now?

First and foremost, I am taking time for myself and my family.
I am reading (and writing)
I am cooking again
I have been catching up on sleep – and have dreams again
I am helping my father-in-law to build a shed in his yard
We bought a 14-year-old Volkswagen van that we are turning into a camper
I have not stopped building guitars – in fact, I am getting set up to do it 'seriously' – so I am also standing up a separate site to promote that activity
I am making music and discovering new music and instruments
I am meeting new people and encountering new situations

There are a lot of folks out there who either think I am crazy (they might be right, but I am happy this way), or think this is some sort of lateral move – I am not searching for another IT job, thanks. Stop the noise on LinkedIn, please: I don't fit in your algorithms, I just made you believe I did, all these years.

Got Orphaned OpsMgr Objects?

Friday, December 17th, 2010

Have you ever wondered what would happen if, in Operations Manager, you deleted a Management Server or Gateway that managed objects (such as network devices) or had agents pointing uniquely to it as their primary server?

The answer is simple, but not very pleasant: you get ORPHANED objects, which will linger in the database but which you won't be able to "see" or re-assign from the GUI anymore.

So the first thing I want to share is a query to determine IF you have any of those orphaned agents. Even if you already know you do – since you are not able to "see" them from the console – you might still have to dig their names out of the database. Here's a query I got from a colleague in our reactive support team:

-- Check for orphaned health services (e.g. agent).
declare @DiscoverySourceId uniqueidentifier;
SET @DiscoverySourceId = dbo.fn_DiscoverySourceId_User();
SELECT TME.[TypedManagedEntityid], HS.PrincipalName
FROM MTV_HealthService HS
INNER JOIN dbo.[BaseManagedEntity] BHS WITH(nolock)
ON BHS.[BaseManagedEntityId] = HS.[BaseManagedEntityId]
-- get host managed computer instances
INNER JOIN dbo.[TypedManagedEntity] TME WITH(nolock)
ON TME.[BaseManagedEntityId] = BHS.[TopLevelHostEntityId]
AND TME.[IsDeleted] = 0
INNER JOIN dbo.[DerivedManagedTypes] DMT WITH(nolock)
ON DMT.[DerivedTypeId] = TME.[ManagedTypeId]
INNER JOIN dbo.[ManagedType] BT WITH(nolock)
ON DMT.[BaseTypeId] = BT.[ManagedTypeId]
AND BT.[TypeName] = N'Microsoft.Windows.Computer'
-- only with missing primary
LEFT OUTER JOIN dbo.Relationship HSC WITH(nolock)
ON HSC.[SourceEntityId] = HS.[BaseManagedEntityId]
AND HSC.[RelationshipTypeId] = dbo.fn_RelationshipTypeId_HealthServiceCommunication()
AND HSC.[IsDeleted] = 0
INNER JOIN DiscoverySourceToTypedManagedEntity DSTME WITH(nolock)
ON DSTME.[TypedManagedEntityId] = TME.[TypedManagedEntityId]
AND DSTME.[DiscoverySourceId] = @DiscoverySourceId
WHERE HS.[IsAgent] = 1
AND HSC.[RelationshipId] IS NULL;
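
By the way, if you'd rather run this check from Powershell than from a SQL query tool, a minimal sketch along the following lines should work – note that the server name, the database name and the path where you saved the T-SQL above are all assumptions you will need to adapt:

#run the orphan check from Powershell (server, database and file path are assumed)
$connString = "Server=SQLSERVER1;Database=OperationsManager;Integrated Security=SSPI"
$conn = new-object System.Data.SqlClient.SqlConnection($connString)
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = [System.IO.File]::ReadAllText("C:\temp\check-orphans.sql")
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { write-host $reader["TypedManagedEntityid"] $reader["PrincipalName"] }
$conn.Close()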

Once you have identified the agent you need to re-assign to a new management server, this is doable from the SDK. Below is a Powershell script I wrote which will re-assign it to the RMS. It has to run from within the OpsMgr Command Shell.
You still need to change the logic that chooses which agent to re-assign – this is meant as a starting base… you could easily expand it into accepting parameters and/or consuming an input text file (a sketch of that follows the script), or using a different Management Server than the RMS… you get the point.

$mg = (get-managementgroupconnection).managementgroup
$mrc = Get-RelationshipClass | where {$_.Name -like "*Microsoft.SystemCenter.HealthServiceCommunication*"}
$cmro = new-object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
$rms = (get-rootmanagementserver).HostedHealthService
$deviceclass = $mg.getmonitoringclass("HealthService")
$mc = Get-Connector | where {$_.Name -like "*MOM Internal Connector*"}
foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    #the next line should be changed to pick the right agent to re-assign
    if ($obj.DisplayName -match 'dsxlab')
    {
        write-host $obj.DisplayName
        $imdd = new-object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
        $cmro.SetSource($obj)
        $cmro.SetTarget($rms)
        $imdd.Add($cmro)
        $imdd.Commit($mc)
    }
}
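
As an example of such an expansion, here is a minimal, untested sketch that reads the agent names from a text file (one name per line – the file path and format are assumptions) instead of hard-coding the -match:

#sketch: re-assign to the RMS all agents listed in a text file (path and format assumed)
$agentnames = get-content "C:\temp\agents-to-reassign.txt"
$mg = (get-managementgroupconnection).managementgroup
$mrc = Get-RelationshipClass | where {$_.Name -like "*Microsoft.SystemCenter.HealthServiceCommunication*"}
$rms = (get-rootmanagementserver).HostedHealthService
$mc = Get-Connector | where {$_.Name -like "*MOM Internal Connector*"}
$deviceclass = $mg.getmonitoringclass("HealthService")
foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    if ($agentnames -contains $obj.DisplayName)
    {
        write-host $obj.DisplayName
        $cmro = new-object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
        $imdd = new-object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
        $cmro.SetSource($obj)
        $cmro.SetTarget($rms)
        $imdd.Add($cmro)
        $imdd.Commit($mc)
    }
}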

Similarly, you might get orphaned network devices. The script below re-assigns all Network Devices to the RMS. This script is actually something I have had even longer than the other one (yes, it has been sitting in my "digital drawer" for a couple of years or more…) and uses the same concept – only you might notice that the relationship's source and target are "reversed", since the relationships are different:

  • the Management Server (source) "manages" the Network Device (target)
  • the Agent (source) "talks" to the Management Server (target)

With a bit of added logic it should be easy to have it work for specific devices only – a possible filter is sketched after the script.

$mg = (get-managementgroupconnection).managementgroup
$mrc = Get-RelationshipClass | where {$_.Name -like "*Microsoft.SystemCenter.HealthServiceShouldManageEntity*"}
$cmro = new-object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
$rms = (get-rootmanagementserver).HostedHealthService
$deviceclass = $mg.getmonitoringclass("NetworkDevice")
#look up the connector once, outside the loop
$mc = Get-Connector | where {$_.Name -like "*MOM Internal Connector*"}
foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    write-host $obj.DisplayName
    $imdd = new-object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
    $cmro.SetSource($rms)
    $cmro.SetTarget($obj)
    $imdd.Add($cmro)
    $imdd.Commit($mc)
}
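
For example, a possible filter (the -match pattern is an assumption – adapt it to however your devices are named) could look like this, reusing the same loop body as above:

#sketch: pick only the devices you care about, then iterate over $targets with the loop above
$targets = $mg.GetMonitoringObjects($deviceclass) | where {$_.DisplayName -match '^192\.168\.'}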


The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

Inversely Proportional

Wednesday, November 17th, 2010


Some time ago I was reading…

[…] Since a good portion of the C# books are between the 500 and 1000 page range, it was refreshing to read a book that was less than 200 pages. Partly this is because when the book was published the surface area of the reusable API was a small fraction of what it is now. However, I also wonder if there was an expectation of disciplined conciseness in technical writing back in the late 80’s that simply no longer exists today. […]

I think this is a very important point. But then, again, it was no secret – this was written in the Preface to the first edition of that book:

[…] is not a "very high level" language, nor a "big" one, and is not specialized to any particular area of application. But its absence of restrictions and its generality make it more convenient and effective for many tasks than supposedly more powerful languages. […]

I think it all boils down to simplicity, as Glenn Scott says:

[…] To master this technique you need to adopt this mindset that your product is, say, simple and clean, and you just know this, and you are confident and assured of this. There is no urgent need to “prove” anything. […]

Another similar book, on a (different) programming language, is "Programming Ruby, the Pragmatic Programmer's Guide", which starts with:

[…] This book is a tutorial and reference for the Ruby programming language. Use Ruby, and you'll write better code, be more productive, and enjoy programming more. […] As Pragmatic Programmers we've tried many, many languages in our search for tools to make our lives easier, for tools to help us do our jobs better. Until now, though, we'd always been frustrated by the languages we were using. […]

Of course that language is simple and sweet, very expressive, and programmers are seen as having to be "pragmatic". No nonsensical, incredibly complex cathedrals (in the language itself and in the documentation) – but quick and dirty things that just WORK.

But way too often, the size of a book is considered a measure of its quality and depth.
I recently read on Twitter about an upcoming "Programming Windows Phone 7" book that would be more than a thousand pages in size: http://twitter.com/#!/MicrosoftPress/status/27374650771

I mean: I do understand that there are many APIs to cover and the book wants to be comprehensive… but… do they really think that the sheer *size* of a book (>1000 pages) is an advantage in itself? It might actually scare people away, the way I see things. But it must be me.

In the meantime the book has been released and can be downloaded from here…

I have not looked at it yet – when I have time to take a look, I'll be able to judge better…

For now I only incidentally noticed that a quick search for books about programming the iPhone/iPad returns books that are between 250 and 500 pages maximum…

And yet simplicity CAN be known to us, and some teams really "get it": take Powershell, for example – it is a refreshing example of this. The official Powershell blog has a subtitle of "changing the world, one line at a time" – that's a strong statement… but in line with the empowerment that simplicity enables. In fact, Bruce Payette's book "Powershell in Action" is also not huge.
I suppose it must be a coincidence. Or maybe not.

OpsMgr Eventlog analysis with Powershell

Wednesday, December 16th, 2009

The following technique should already be understood by any powersheller. Here we focus on Operations Manager log entries, even if the data mining technique shown is entirely possible – and encouraged :-) – with any other event log.

Let’s start by getting our eventlog into a variable called $evt:

PS  >> $evt = Get-Eventlog “Operations Manager”

The above only works locally in POSH v1.

In POSH v2 you can go remotely by using the “-computername” parameter:

PS  >> $evt = Get-Eventlog “Operations Manager” –computername

Anyhow, you can get at this remotely in POSH v1 too, with this other, more "dotNET-ish" syntax:

PS >> $evt = (New-Object System.Diagnostics.Eventlog -ArgumentList "Operations Manager").get_Entries()

you could even export this (or any of the above) to a CLIXML file:

PS >> (New-Object System.Diagnostics.Eventlog -ArgumentList "Operations Manager").get_Entries() | export-clixml -path c:\evt\Evt-OpsMgr-RMS.MYDOMAIN.COM.xml

and then you could reload your eventlog to another machine:

PS  >> $evt = import-clixml c:\evt\Evt-OpsMgr-RMS.MYDOMAIN.COM.xml

Whichever way you populated your $evt variable, be it from a "live" eventlog or by re-importing it from XML, you can then start analyzing it:

PS  >> $evt | where {$_.Entrytype -match "Error"} | select EventId,Source,Message | group eventid

Count Name                      Group
—– —-                      —–
1510 4509                      {@{EventID=4509; Source=HealthService; Message=The constructor for the managed module type "Microsoft.EnterpriseManagement.Mom.DatabaseQueryModules.GroupCalculatio.
   15 20022                     {@{EventID=20022; Source=OpsMgr Connector; Message=The health service {7B0E947B-2055…
    3 26319                     {@{EventID=26319; Source=OpsMgr SDK Service; Message=An exception was thrown while p…
    1 4512                      {@{EventID=4512; Source=HealthService; Message=Converting data batch to XML failed w…

The above is functionally identical to the following:

PS  >> $evt | where {$_.Entrytype -eq 1} | select EventID,Source,Message | group eventid

Count Name                      Group
—– —-                      —–
1510 4509                      {@{EventID=4509; Source=HealthService; Message=The constructor for the managed modul…
   15 20022                     {@{EventID=20022; Source=OpsMgr Connector; Message=The health service {7B0E947B-2055…
    3 26319                     {@{EventID=26319; Source=OpsMgr SDK Service; Message=An exception was thrown while p…
    1 4512                      {@{EventID=4512; Source=HealthService; Message=Converting data batch to XML failed w…

Note that Eventlog entries' type is an ENUM that has values of 0, 1 and 2 – similarly to OpsMgr health states – but beware that their order is not the same, as shown in the following table:

Code   OpsMgr Health State   Eventlog EntryType
----   -------------------   ------------------
0      Not Monitored         Information
1      Success               Error
2      Warning               Warning
3      Critical              (none)

Let’s now look at Information Events (Entrytype –eq 0)

PS  >> $evt | where {$_.Entrytype -eq 0} | select EventID,Source,Message | group eventid

Count Name                      Group
—– —-                      —–
4135 2110                      {@{EventID=2110; Source=HealthService; Message=Health Service successfully transferr…
1548 21025                     {@{EventID=21025; Source=OpsMgr Connector; Message=OpsMgr has received new configura…
4644 7026                      {@{EventID=7026; Source=HealthService; Message=The Health Service successfully logge…
1548 7023                      {@{EventID=7023; Source=HealthService; Message=The Health Service has downloaded sec…
1548 7025                      {@{EventID=7025; Source=HealthService; Message=The Health Service has authorized all…
1548 7024                      {@{EventID=7024; Source=HealthService; Message=The Health Service successfully logge…
1548 7028                      {@{EventID=7028; Source=HealthService; Message=All RunAs accounts for management gro…
   16 20021                     {@{EventID=20021; Source=OpsMgr Connector; Message=The health service {7B0E947B-2055…
   13 7019                      {@{EventID=7019; Source=HealthService; Message=The Health Service has validated all …
    4 4002                      {@{EventID=4002; Source=Health Service Script; Message=Microsoft.Windows.Server.Logi…


And “Warning” events (Entrytype –eq 2):

PS  >> $evt | where {$_.Entrytype -eq 2} | select EventID,Source,Message | group eventid

Count Name                      Group
—– —-                      —–
1511 1103                      {@{EventID=1103; Source=HealthService; Message=Summary: 1 rule(s)/monitor(s) failed …
  501 20058                     {@{EventID=20058; Source=OpsMgr Connector; Message=The Root Connector has received b…
    5 29202                     {@{EventID=29202; Source=OpsMgr Config Service; Message=OpsMgr Config Service could …
  421 31501                     {@{EventID=31501; Source=Health Service Modules; Message=No primary recipients were …
   18 10103                     {@{EventID=10103; Source=Health Service Modules; Message=In PerfDataSource, could no…
    1 29105                     {@{EventID=29105; Source=OpsMgr Config Service; Message=The request for management p…



Ok, now let's look at those 20022 events, for example, so we get an idea of which healthservices they are referring to (20022 indicates "heartbeat failure", btw):

PS  >> $evt | where {$_.eventid -eq 20022} | select message

The health service {7B0E947B-2055-C12A-B6DB-DD6B311ADF39} running on host and s…
The health service {E3B3CCAA-E797-4F08-860F-47558B3DA477} running on host and serving…
The health service {E3B3CCAA-E797-4F08-860F-47558B3DA477} running on host and serving…
The health service {E3B3CCAA-E797-4F08-860F-47558B3DA477} running on host and serving…
The health service {52E16F9C-EB1A-9FAF-5B9C-1AA9C8BC28E3} running on host and se…
The health service {F96CC9E6-2EC4-7E63-EE5A-FF9286031C50} running on host and s…
The health service {71987EE0-909A-8465-C32D-05F315C301CC} running on host….
The health service {BAF6716E-54A7-DF68-ABCB-B1101EDB2506} running on host and serving mana…
The health service {30C81387-D5E0-32D6-C3A3-C649F1CF66F1} running on host and…
The health service {3DCDD330-BBBB-B8E8-4FED-EF163B27DE0A} running on host and s…
The health service {13A47552-2693-E774-4F87-87DF68B2F0C0} running on host and …
The health service {920BF9A8-C315-3064-A5AA-A92AA270529C} running on host FSCLU2 and serving management group Pr…
The health service {FAA3C2B5-C162-C742-786F-F3F8DC8CAC2F} running on host and s…
The health service {3DCDD330-BBBB-B8E8-4FED-EF163B27DE0A} running on host and s…
The health service {3DCDD330-BBBB-B8E8-4FED-EF163B27DE0A} running on host and s…


Or let's look at some warnings for the Config Service:

PS  >> $evt | where {$_.Eventid -eq 29202}

   Index Time          EntryType   Source                 InstanceID Message
   —– —-          ———   ——                 ———- ——-
5535065 Dec 07 21:18  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…
5543960 Dec 09 16:39  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…
5545536 Dec 10 01:06  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…
5553119 Dec 11 08:24  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…
5555677 Dec 11 10:34  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…

Having seen those, can you remember any particular load you had on those days that would justify the instance space changing so quickly that the Config Service couldn't keep up?


Or let's group the events with ID 21025 by day, so we know how many Config recalculations we've had (which, if many, might indicate Config Churn):

PS  >> $evt | where {$_.Eventid -eq 21025} | select TimeGenerated | % {$_.TimeGenerated.ToShortDateString()} | group

Count Name                      Group
—– —-                      —–
   39 12/7/2009                 {12/7/2009, 12/7/2009, 12/7/2009, 12/7/2009…}
  203 12/8/2009                 {12/8/2009, 12/8/2009, 12/8/2009, 12/8/2009…}
  217 12/9/2009                 {12/9/2009, 12/9/2009, 12/9/2009, 12/9/2009…}
  278 12/10/2009                {12/10/2009, 12/10/2009, 12/10/2009, 12/10/2009…}
  259 12/11/2009                {12/11/2009, 12/11/2009, 12/11/2009, 12/11/2009…}
  224 12/12/2009                {12/12/2009, 12/12/2009, 12/12/2009, 12/12/2009…}
  237 12/13/2009                {12/13/2009, 12/13/2009, 12/13/2009, 12/13/2009…}
   91 12/14/2009                {12/14/2009, 12/14/2009, 12/14/2009, 12/14/2009…}


Event ID 21025 shows that there is a new configuration for the Management Group.

Event ID 29103 has a similar wording, but shows that there is a new configuration for a given Healthservice. There should normally be many more of these events, unless your only Health Service is the RMS, which is unlikely…

If we look at the event description ("message") in search of the name (or even the GUID, as both are present) of our RMS, as follows, then the counts should be the same as those of the 21025 events above:

PS  >> $evt | where {$_.Eventid -eq 29103} | where {$_.message -match ""} | select TimeGenerated | % {$_.TimeGenerated.ToShortDateString()} | group

Count Name                      Group
—– —-                      —–
   39 12/7/2009                 {12/7/2009, 12/7/2009, 12/7/2009, 12/7/2009…}
  203 12/8/2009                 {12/8/2009, 12/8/2009, 12/8/2009, 12/8/2009…}
  217 12/9/2009                 {12/9/2009, 12/9/2009, 12/9/2009, 12/9/2009…}
  278 12/10/2009                {12/10/2009, 12/10/2009, 12/10/2009, 12/10/2009…}
  259 12/11/2009                {12/11/2009, 12/11/2009, 12/11/2009, 12/11/2009…}
  224 12/12/2009                {12/12/2009, 12/12/2009, 12/12/2009, 12/12/2009…}
  237 12/13/2009                {12/13/2009, 12/13/2009, 12/13/2009, 12/13/2009…}
   91 12/14/2009                {12/14/2009, 12/14/2009, 12/14/2009, 12/14/2009…}


Going back to the initial counts of events by their IDs, the error counts above spotted the presence of a lonely 4512 event, which might have gone undetected if just browsing the eventlog with the GUI, since it only occurred once.

Let’s take a look at it:

PS  >> $evt | where {$_.eventid -eq 4512}

   Index Time          EntryType   Source                 InstanceID Message
   —– —-          ———   ——                 ———- ——-
5560756 Dec 12 11:18  Error       HealthService          3221229984 Converting data batch to XML failed with error …

Now, when it comes to counts, Powershell is great. But sometimes Powershell makes it difficult to actually READ the (long) event messages (descriptions) in the console. For example, our event ID 4512 is difficult to read in its entirety and gets truncated with trailing dots…

We can of course increase the window size and/or select only THAT one field to read it better:

PS  >> $evt | where {$_.eventid -eq 4512} | select message

Converting data batch to XML failed with error "Not enough storage is available to complete this operation." (0x8007000E) in rule "Microsoft.SystemCenter.ConfigurationService.CollectionRule.Event.ConfigurationChanged" running for instance "RMS.MYDOMAIN.COM" with id:"{04F4ADED-2C7F-92EF-D620-9AF9685F736F}" in management group "SCOMPROD"

Or, worst case, if it still does not fit, we can always go and search for it in the actual eventlog application… but at least we will have spotted it!
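
Another small trick worth mentioning: Format-List wraps long properties onto multiple lines instead of truncating them at the window width, so the full message becomes readable right in the console:

PS  >> $evt | where {$_.eventid -eq 4512} | Format-List EventID,Source,Message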


The above should give you an idea of what is easily accomplished with some simple one-liners, and how they can be a useful aid in analyzing/digging into eventlogs.

All of the above is ALSO possible with Logparser, which would actually be lighter on memory usage and quicker, to be honest!

I just like Powershell syntax a lot more, and its ubiquity, which makes it a better option for me. Your mileage may vary, of course.

PS> Get-Milk

Thursday, September 17th, 2009


I printed a t-shirt for Sara with a baby-friendly Powershell cmdlet ("Get-Milk").
She already seems to be wondering what script she can write with it.


The mystery of the lost registry values

Thursday, September 10th, 2009

During the OpsMgr Health Check engagement we use custom code to assess the customer's Management Group, as I wrote here already. Given that the customer tells us which machine is the RMS, one of the very first things our tool does is connect to the RMS's registry and check the values under HKLM\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup to see which machine holds the database. It is a rather critical piece of information for us, as we run a number of queries afterward… so we need to know where the db is, obviously :-)

I learned from here how to access the registry remotely through Powershell, using .Net classes. This is also one of the methods illustrated in this other article on the TechNet Script Center.

Therefore the "core" instructions of the function I was using to access the registry looked like the following:

Function GetValueFromRegistry ([string]$computername, $regkey, $value)
{
    $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $computername)
    $regKey = $reg.OpenSubKey("$regKey")
    $result = $regKey.GetValue("$value")
    return $result
}


[Note: the actual function is bigger, and contains error handling, and logging, and a number of other things that are unnecessary here]

Therefore, the function was called as follows:
GetValueFromRegistry $RMS "SOFTWARE\\Microsoft\\Microsoft Operations Manager\\3.0\\Setup" "DatabaseServerName"
Now so far so good.

In theory.


Now, for some reason that I could not immediately explain, we had noticed that this piece of code performing registry access, while working most of the time, on SOME occasions gave errors about not being able to open the registry value…


When you are onsite with a customer conducting an assessment, the PFE engineer does not always have the time to troubleshoot the error… as time is critical, we usually resorted to just running the assessment from ANOTHER machine, and this "solved" the issue… but it always left me wondering WHY it was giving an error. I had suspected an issue with permissions first, but it could not be that – the permissions were obviously right: performing the assessment from another machine with the same user was working!

A few days ago my colleague and buddy Stefan Stranger figured out that this was related to the platform architecture:

• x64 client to x64 RMS was working
• x64 client to x86 RMS was working
• x86 client to x86 RMS was working
• x86 client to x64 RMS was NOT working

You don’t need to use our custom code to reproduce this, REGEDIT shows the behavior as well.

If, from a 64-bit server, you open a remote registry connection to a 64-bit RMS server, you can see all OpsMgr registry keys:


If, however, from a 32-bit server you open a remote registry connection to a 64-bit RMS server, you don't see ALL – but only SOME – OpsMgr registry keys:

So here's the reason! This is what was happening! How could I not think of it before? It was nothing related to permissions, but to registry redirection! The issue was happening because the 32-bit machine uses the 32-bit registry editor, and when accessing a 64-bit machine it defaults to the Wow6432Node location in the registry. But not all OpsMgr data is present in the WOW64 location on a 64-bit machine – only some of it.

So, just like regedit, the 32-bit Powershell and the 32-bit .Net Framework were being redirected to the 32-bit-compatibility registry keys… not finding the stuff we needed, whereas a 64-bit application could find it. Any 32-bit application gets redirected to a 32-bit-safe registry by default.

So, after finally UNDERSTANDING what the issue was, I started wondering: ok… but how can I access the REAL "HKLM\SOFTWARE\Microsoft" key on a 64-bit machine when running this FROM a 32-bit machine – WITHOUT being redirected to "HKLM\SOFTWARE\Wow6432Node\Microsoft"? What if my application CAN deal just fine with those values and actually NEEDS to access them?

The answer wasn't as easy as the question. I did a bit of digging on this, and I still have NOT found a way to do it with the .Net classes. It seems that in a lot of situations Powershell, or even the .Net classes, are nice and sweet wrappers over the underlying Windows APIs… but for how sweet and easy they are, they are very often not very complete wrappers – letting you do just about enough for most situations, but not quite everything you could do with the API underneath. But I digress, here…

The good news is that I did manage to get this working, but I had to resort to the dear old WMI StdRegProv… There are a number of locations on the Internet mentioning the issue of accessing the 32-bit registry from 64-bit machines or vice versa, but all the examples I found were using VBScript. I needed it in Powershell. Therefore I started with the VBScript example code that is present here, and ported it to Powershell.

Handling the WMI COM object from Powershell was slightly less intuitive than in VBScript, and it took me a couple of hours to figure out how to change some stuff, especially this bit that sets the parameters collection:

Set Inparams = objStdRegProv.Methods_("GetStringValue").Inparameters
Inparams.Hdefkey = HKLM
Inparams.Ssubkeyname = RegKey
Inparams.Svaluename = RegValue
Set Outparams = objStdRegProv.ExecMethod_("GetStringValue", Inparams, , objCtx)

INTO this:

$Inparams = ($objStdRegProv.Methods_ | where {$_.Name -eq "GetStringValue"}).InParameters.SpawnInstance_()
($Inparams.Properties_ | where {$_.Name -eq "Hdefkey"}).Value = $HKLM
($Inparams.Properties_ | where {$_.Name -eq "Ssubkeyname"}).Value = $regkey
($Inparams.Properties_ | where {$_.Name -eq "Svaluename"}).Value = $value
$Outparams = $objStdRegProv.ExecMethod_("GetStringValue", $Inparams, "", $objNamedValueSet)


I have only done limited testing at this point and, even if the actual work now requires nearly 15 lines of code vs. the previous 3 lines of the .Net implementation, it at least seems to work just fine.

What follows is the complete code of my replacement function, in all its ugly glory:


Function GetValueFromRegistryThruWMI([string]$computername, $regkey, $value)
{
    #constant for HKEY_LOCAL_MACHINE (&H80000002 in the original VBScript; Powershell wants the decimal value)
    $HKLM = 2147483650

    #creates an SWbemNamedValueSet object
    $objNamedValueSet = New-Object -COM "WbemScripting.SWbemNamedValueSet"

    #adds the actual value that will request the target to provide 64bit-registry info
    $objNamedValueSet.Add("__ProviderArchitecture", 64) | Out-Null

    #back to all the other usual COM objects for WMI that you have used a zillion times in VBScript
    $objLocator = New-Object -COM "Wbemscripting.SWbemLocator"
    $objServices = $objLocator.ConnectServer($computername,"root\default","","","","","",$objNamedValueSet)
    $objStdRegProv = $objServices.Get("StdRegProv")

    #obtain an InParameters object specific to the method
    $Inparams = ($objStdRegProv.Methods_ | where {$_.Name -eq "GetStringValue"}).InParameters.SpawnInstance_()

    #add the input parameters
    ($Inparams.Properties_ | where {$_.Name -eq "Hdefkey"}).Value = $HKLM
    ($Inparams.Properties_ | where {$_.Name -eq "Ssubkeyname"}).Value = $regkey
    ($Inparams.Properties_ | where {$_.Name -eq "Svaluename"}).Value = $value

    #execute the method
    $Outparams = $objStdRegProv.ExecMethod_("GetStringValue", $Inparams, "", $objNamedValueSet)

    #check the return value and extract the result
    if (($Outparams.Properties_ | where {$_.Name -eq "ReturnValue"}).Value -eq 0)
    {
        write-host "it worked"
        $result = ($Outparams.Properties_ | where {$_.Name -eq "sValue"}).Value
        write-host "Result: $result"
        return $result
    }
    else
    {
        write-host "nope"
    }
}


which can be called similarly to the previous one:
GetValueFromRegistryThruWMI $RMS "SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup" "DatabaseServerName"

[Note: you don't need to double-escape the backslashes here, compared to the .Net implementation]

Enjoy your cross-architecture registry access: from 32bit to 64bit – and back!

Using the SCX Agent with WSMan from Powershell v2

Monday, June 1st, 2009

So Powershell v2 adds a nice bunch of WS-Man related cmdlets. Let's see how we can use them to interact with OpenPegasus's WSMan implementation on an SCX agent.

PS C:\maint> test-wsman -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:


But we do get this error:

Test-WSMan : The server certificate on the destination computer (virtubuntu.huis.dom:1270) has the following errors:
The SSL certificate could not be checked for revocation. The server used to check for revocation might be unreachable.

The SSL certificate is signed by an unknown certificate authority.
At line:1 char:11
+ test-wsman <<<<  -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl
+ CategoryInfo          : InvalidOperation: (:) [Test-WSMan], InvalidOperationException
+ FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.TestWSManCommand

The credentials above have to be a unix login. Which we typed correctly. But we still can't get through, as the certificate used by the agent is not trusted by our workstation. This seems to be the "usual" issue I first faced when testing SCX with WINRM in beta 1. At the time I simply dismissed it with the following sentence:

[…] Of course you have to solve some other things such as DNS resolution AND trusting the self-issued certificates that the agent uses, first. Once you have done that, you can run test queries from the Windows box towards the Unix ones by using WinRM. […]

and I sincerely thought that it would explain things pretty well… but eventually a lot of people got confused by this and did not know what to do, especially the part about trusting the certificate. Anyway, in later posts I figured out you could pass the -SkipCACheck parameter to WINRM… which solved the issue of having to trust the certificate (which is fine for testing, but I would not use that for automations and scripts running in production… as it might expose your credentials to man-in-the-middle attacks).

So it seems that with the Powershell cmdlets we are back to that issue, as I can’t find a parameter to skip the CA check. Maybe it is there, but with PSv2 not having been released yet, I don't know everything about it, and the CTP documentation is not yet complete. Therefore, back to trusting the certificate.
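
As an aside, and purely as an assumption to verify once v2 ships: if the released cmdlets expose the same options the underlying WinRM layer has, a session option along these lines might let you skip the certificate checks for testing – with the same man-in-the-middle caveats mentioned above (the resource URI is also just an illustration):

PS C:\maint> $so = New-WSManSessionOption -SkipCACheck -SkipCNCheck
PS C:\maint> Get-WSManInstance -computername virtubuntu.huis.dom -port 1270 -usessl -authentication basic -credential (get-credential) -sessionoption $so -enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx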

Trusting the certificate is actually very simple, but it can be a bit tricky when passing those certs back and forth between unix and windows. So let's make the process a bit clearer.

All of the SCX agents' certificates are ultimately signed by a key on the Management Server that discovered them, but I don't currently know where that certificate/key is stored on the Management Server. Anyway, you can get it from the agent certificate – as you only really need the public key, not the private signing key.

Use WinSCP or any other utility to copy the certificate off one of the agents.
You can find that in the /etc/opt/microsoft/scx/ssl location:


That scx-host-computername.pem file is your agent certificate.

Copy it to the Management Server and change its extension from .pem to .cer. Now Windows will be happy to show it to you with the usual certificate interface:


We need to go to the “Certification Path” tab, select the ISSUER certificate (the one called “SCX-Certificate”):


then go to the “Details” tab, and use the “Copy to File” button to export the certificate.

After you have the certificate in a .CER file, you can add it to the “trusted root certification authorities” store on the computer you are running your powershell tests from.
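
If you prefer to do this last step from Powershell rather than with the certificates MMC, a sketch like the following (the .cer file name and path are assumptions) adds the exported issuer certificate to the machine's trusted roots:

#sketch: import the exported issuer certificate into the LocalMachine "Root" store (path assumed)
$cert = new-object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\maint\scx-certificate.cer")
$store = new-object System.Security.Cryptography.X509Certificates.X509Store("Root", "LocalMachine")
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$store.Add($cert)
$store.Close()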


So, after you have trusted it, the same command as above actually works now:

PS C:\maint> test-wsman -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:

wsmid           :
lang            :
ProtocolVersion :
ProductVendor   : Microsoft System Center Cross Platform
ProductVersion  : 1.0.4-248

Ok, we can talk to it! Now we can do something more interesting, like actually returning instances and/or calling methods:

PS C:\maint> Get-WSManInstance -computer virtubuntu.huis.dom -authentication basic -credential (get-credential) -port 1270 -usessl -enumerate


This is far from exhaustive, but it should get you started on a world of possibilities for automating diagnostics and responses with Powershell v2 towards OpsMgr 2007 R2 Cross-Platform machines. Enjoy!



Get-WmiCustom (aka: Get-WMIObject with timeout!)

Wednesday, May 27th, 2009

I make heavy use of WMI.

But when using it to gather information from customers' machines for assessments, I sometimes find the occasional broken WMI repository. There are a number of ways in which WMI can become corrupted and return weird results. Most of the time you would just get errors, such as "Class not registered" or "provider load failure". I can handle those errors from within scripts.

But there are some more subtle – and annoying – ways in which the WMI repository can get corrupted. The situations I am talking about are the ones where WMI will accept your query… will say it is executing it… but it will never actually return any error, and will just stay stuck performing your query forever. Until your client application decides to time out. Which in some cases does not happen.

Now that was my issue – when my assessment script (which was using the handy Powershell Get-WmiObject cmdlet) hit one of those machines… the whole script would hang forever and never finish its job. Ok, sure, the real solution would be actually FIXING the WMI repository and then trying again. But remember I am talking about an assessment: if the information I am getting is just one piece of a bigger puzzle, and I don't necessarily care about it and can continue without it – I want to be able to do so: skip that info, maybe the whole section, report an error saying I was not able to get that information, and continue getting the remaining info. I can still fix the issue on the machine afterward AND run the assessment script again, but in the first place I just want to get a picture of what the system looks like. With the good and with the bad things. Especially, I do want to take that whole picture – not just a piece of it.

Unfortunately, the Get-WmiObject cmdlet does not let you specify a timeout. Therefore I cooked my own function, which has behaviour compatible with that of Get-WmiObject, but with an added "-timeout" parameter. I dubbed it "Get-WmiCustom".

Function Get-WmiCustom([string]$computername,[string]$namespace,[string]$class,[int]$timeout=15)
{
    $ConnectionOptions = new-object System.Management.ConnectionOptions
    $EnumerationOptions = new-object System.Management.EnumerationOptions

    #apply the timeout to the enumeration options
    $timeoutseconds = new-timespan -seconds $timeout
    $EnumerationOptions.set_timeout($timeoutseconds)

    $assembledpath = "\\" + $computername + "\" + $namespace
    #write-host $assembledpath -foregroundcolor yellow

    $Scope = new-object System.Management.ManagementScope $assembledpath, $ConnectionOptions

    $querystring = "SELECT * FROM " + $class
    #write-host $querystring

    $query = new-object System.Management.ObjectQuery $querystring
    $searcher = new-object System.Management.ManagementObjectSearcher
    $searcher.set_options($EnumerationOptions)
    $searcher.Query = $querystring
    $searcher.Scope = $Scope

    #on failure, the trap emits the error record instead of hanging forever
    trap { $_ } $result = $searcher.get()

    return $result
}

You can call it as follows, which is similar to how you would call Get-WmiObject:

get-wmicustom -class Win32_Service -namespace "root\cimv2" -computername server1.domain.dom

or, of course, specifying the timeout (in seconds):

get-wmicustom -class Win32_Service -namespace "root\cimv2" -computername server1.domain.dom -timeout 1

and obviously, since the function returns objects just like the original cmdlet, it is also possible to pipe them to other commands:

get-wmicustom -class Win32_Service -namespace "root\cimv2" -computername server1.domain.dom -timeout 1 | Format-Table
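
And here is a possible calling pattern for the assessment scenario described above – the machine list and the class are just placeholders; the point is that a hung WMI repository no longer blocks the rest of the loop:

#sketch: survey several machines, skipping the ones whose WMI does not answer in time
$machines = "server1.domain.dom", "server2.domain.dom"
foreach ($m in $machines)
{
    $result = get-wmicustom -class Win32_OperatingSystem -namespace "root\cimv2" -computername $m -timeout 30
    #on failure, the trap inside the function emits an ErrorRecord instead of WMI objects
    if ($result -eq $null -or $result -is [System.Management.Automation.ErrorRecord])
    {
        write-host "Skipping $m - no WMI answer within the timeout"
    }
    else
    {
        $result | select-object __SERVER,Caption
    }
}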

Early Adoptions, Health Checks and New Year Rants.

Tuesday, December 30th, 2008


Two days ago I read the following Tweet by Hugh MacLeod:

"[…] Early Adopter Problem: How to differentiate from the bandwagon, once the bandwagon starts moving faster than you are […]"

That makes me think of early adoption of a few technologies I have been working with, and how the community around those evolved. For example:

Operations Manager… early adoption meant that I have been working with it since the beta, and had posted one of the earliest posts about how to use a script in a Unit Monitor back in May 2007 (the product was released in April 2007 and there was NO documentation back then, so we really had to figure everything out ourselves…), but someone seems to think it is worth repeating the very same lesson in November 2008, with not a lot of changes, as I wrote here. I don't mean to be rude to Anders… repeating things will surely help the late adopters find the information they need, of course.

I also started playing early with Powershell. I posted my first (and only) cmdlet back in 2006. It was not a lot more than a test for myself, to learn how to write one, but that's just to say that I started playing with it early. I have been using it to automate tasks, for example.

Going back to the quote above, everyone gets on the bandwagon posting examples and articles. I had been asked a few times about writing articles on OpsMgr and Powershell usage, but I declined, as I was too busy using this knowledge to do stuff for work (where "work" is defined as in "work that pays your mortgage"), rather than seeking personal prestige through articles and blogs. Anyway, that kind of article is appearing all over the Internet and the blogosphere now. The above examples made me think of early adoption, and the bandwagon that follows later on… but even as an early adopter, I was never very noisy or visible.

Now, going back to what I do for work (which I mentioned here and here in the past), I work in the Premier Field Engineering organization of Microsoft Services, which provides Premier services to customers. Microsoft Premier customers have a wide range of Premier agreement features and components that they can use to support their people, improve their processes, and improve the productive use of the Microsoft technology they have purchased. Some of these services are known to the world as "Health Checks", some as "Risk Assessment Programs" (or, shortly, RAPs). These are basically services where one of our technology experts goes to the customer site and uses a custom, private Microsoft tool to gather a huge amount of data from the product we mean to look at (be it SQL, Exchange, AD or anything else…). The Health Check or RAP tool collects the data and outputs a draft of the report that will be delivered to the customer later on, with all the right sections and chapters. This is done so that every report of the same kind will look consistent, even if the engagement is performed by a different engineer in a different part of the world. The engineer will of course analyze the collected data and write recommendations about what is configured properly and/or what could or should be changed and/or improved in the implementation to make it adhere to best practices. To make sure only the right people actually go onsite to do this job, we have a strict internal accreditation process that must be followed; only accredited resources who know the product well enough, and know exactly how to interpret the data that the tool collects, are allowed to use it, deliver the engagement, and present/write the findings to the customer.

So why am I telling you this here, and what have I been using my early knowledge of OpsMgr and Powershell for?

I have used that to write the Operations Manager Health Check, of course!

We had a MOM 2005 Health Check already, but since the technology changed so much from MOM to OpsMgr, we had to write a completely new tool. Jeff (the original MOM 2005 author, who does not have a blog that I can link to) and I are the main coders of this tool… and the tool itself is A POWERSHELL script. A longish one, of course (7000 lines, more or less), but nothing more than a Powershell script, at the end of the day. A few more colleagues helped shape the features and tested the tool, including Kevin Holman. Some of the database queries on Kevin's blog are in fact what we use to extract some of the data (beware that some of those queries have recently been updated, in case you saved them and are using your local copy!), while some other information is gathered with internal and/or custom queries. Sometimes we use OpsMgr cmdlets or go to the SDK service, but a lot of the time we query the database directly (we really should use the SDK all the time, but for certain stuff direct database access is way faster). It took most of the past year to write it, test it, troubleshoot it, fix it, and deliver the first engagements as "beta" to some customers to help iron out the process… and now the delivery is available! If a year seems like a long time, you have to consider that this is all work that gets done next to what we all normally have to do with customers, not replacing it (i.e. I am not free to sit on my butt all day and just write the tool… I still have to deliver services to customers day in, day out, in the meantime).

Occasionally, during this past calendar year that is approaching its end, I have been willing – and have found some extra time – to disclose some bits and pieces, techniques and prototypes of how to use Powershell and OpsMgr together, such as innovative ways to use Powershell in OpsMgr against beta features, but in general most of my early adopter's investment went into the private tool for this engagement, and that is one of the reasons I couldn't blog or write much about it, it being Microsoft Intellectual Property.

But it is also true that I did not care to write about other stuff when I considered it too easy or it could be found in the documentation. I like writing about ideas, thoughts, rants OR things that I discover and that are not well documented at the time I study them… so when I figure things out I might like leaving a trail for some to follow. But I am not here to spoon-feed people like some in the bandwagon are doing. Now the bandwagon is busy blogging and writing continuously about some aspect of OpsMgr (known or unknown, documented or not), and the answer to Hugh's original question is, in my opinion, that it does not really matter what the bandwagon is doing right now. I was never here to do the same thing. I think that is my differentiator. I am not saying that what a bunch of colleagues and enthusiasts are doing is not useful: blogging and writing about the various things they experiment with is interesting and will be useful to people. But blogs are useful only up to a certain point. I think blogs are best suited for conversations and thoughts (rather than for "howtos"), and what I would love to see instead is less marketing hype when new versions are announced, and more real, official documentation.

But I think I should stop caring about what the bandwagon is doing, because that's just another ego trip at the end of the day. What I should more sensibly do is listen to my horoscope instead:

[…] "How do you slay the dragon?" journalist Bill Moyers asked mythologist Joseph Campbell in an interview. By "dragon," he was referring to the dangerous beast that symbolizes the most unripe and uncontrollable part of each of our lives. In reply to Moyers, Campbell didn't suggest that you become a master warrior, nor did he recommend that you cultivate high levels of sleek, savage anger. "Follow your bliss," he said simply. Personally, I don't know if that's enough to slay the dragon — I'm inclined to believe that you also have to take some defensive measures — but it's definitely worth an extended experiment. Would you consider trying that in 2009? […]

Programmatically Check for Management Pack updates in OpsMgr 2007 R2

Saturday, November 29th, 2008

One of the cool new features of System Center Operations Manager 2007 R2 is the possibility to check for and update Management Packs from the catalog on the Internet, directly from the Operations Console:

Select Management Packs from Catalog

Even if the backend for this feature is not yet documented, I was extremely curious to see how it had actually been implemented. Especially since it took a while for this feature to become available in OpsMgr, I suspected it could not be as simple as one downloadable XML file, like the one the old MOM 2005 MPNotifier had been using in the past.

Therefore I observed the console's traffic through the lens of my proxy, and got my answer:

ISA Server Log

So that was it: a .Net Web Service.

I tried to ask the web service itself for discovery information, but failed:


Since there is no WSDL available, but I badly wanted to interact with it, I had to figure out what kind of requests would be allowed, how they should be written, what methods they could call and what parameters I should pass in the call. In order to get started on this, I thought I could just observe its network traffic. And so I did… I fired up Network Monitor and captured the traffic:

Microsoft Network Monitor 3.2

Microsoft Network Monitor is beautiful and useful for this kind of stuff, as it lets you easily identify which application a given stream of traffic belongs to, just like in the picture above. After I had isolated just the traffic from the Operations Console, I saved those captured packets in CAP format and opened them again in Wireshark for a different kind of analysis – "Follow TCP Stream":

Wireshark: Follow TCP Stream

This showed me the reassembled conversation, and what kind of request was actually done to the Web Service. That was the information I needed.

Ready to rock at this point, I came up with this Powershell script (to be run in OpsMgr Command Shell) that will:

1) connect to the web service and retrieve the complete MP list for R2 (this part is also useful on its own, as it shows how to interact with a SOAP web service in Powershell, invoking a method of the web service by issuing a specially crafted POST request – a generic sketch of the technique follows further below. To give due credit, for this part I first looked at this PERL code, which I then adapted and ported to Powershell);

2) loop through the results of the "Get-ManagementPack" opsmgr cmdlet and compare each MP found in the Management Group with those pulled from the catalog;

3) display a table of all imported MPs with both the version imported in your Management Group AND the version available on the catalog:

Script output in OpsMgr Command Shell
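
As for point 1, invoking a SOAP method without a WSDL boils down to POSTing a hand-crafted envelope and parsing the XML that comes back. The following is a generic, illustrative sketch of the technique – the endpoint URL, method name and SOAPAction are placeholders, NOT the real catalog service:

#sketch: call a SOAP web service by hand (endpoint, method and action are placeholders)
$url = "http://www.example.com/CatalogService.asmx"
$envelope = @'
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetManagementPacks xmlns="http://tempuri.org/" />
  </soap:Body>
</soap:Envelope>
'@
$wc = new-object System.Net.WebClient
$wc.Headers.Add("Content-Type", "text/xml; charset=utf-8")
$wc.Headers.Add("SOAPAction", '"http://tempuri.org/GetManagementPacks"')
[xml]$response = $wc.UploadString($url, $envelope)
$response.Envelope.Body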

Remember that this is just SAMPLE code, it is not meant to be used in a production environment, and it is worth mentioning again that OpsMgr 2007 R2 is BETA software at the time of writing; therefore this functionality (and its implementation) might change at any time, and the script will break. Also, at present, the MP Catalog web service still returns slightly older MP versions and is not yet kept in sync and updated with MP releases, but it will be ready and with complete/updated content by the time R2 gets released.



Conversation about Blogs with a customer

Friday, March 28th, 2008

I usually don't like mentioning specific facts that happened to me at work. But work is part of life, so even if this is mostly a personal blog, I cannot help but write about certain things that make me think, when they happen.

When I end up having conversations such as this one, I get really sad: I thought we had finally passed the arrogant period where we had to spoon-feed customers, and that we were now mature enough to consider them smart people and to provide cool, empowering technologies for them to use. I also thought that pretty much everybody liked Microsoft finally opening up and actually talking TO people… not only talking them INTO buying something – but having real conversations.

I get sad when I find that people still don't seem to accept that, and want the old model back instead. Kinda weird.


The conversation went as follows (the words are not exactly those – we were speaking Italian and I have reconstructed the conversation – but you should get the sense of it anyway):



Me: "The SDK service allows you to do quite a lot of cool stuff. Unfortunately not all of that functionality is completely or always easily exposed in the GUI. That is, for example: it is very EASY to define overrides, but it can get very tricky to find them back once set. That's why you can use this little useful tool that the developer of that SDK service has posted on his blog…"

Cust: "…but we can't just read blogs here and there!"

Me: "Well, I mean, then you may have to wait for the normal release cycle. It might be that those improvements will make it in to the product. That might happen in months, if you are lucky, or maybe never. What's wrong if he publishes that on his blog, bypassing the bureaucracy crap, and makes your life easier with it RIGHT NOW?"

Cust: "It is not official, I want it in the product!"

Me: "I see, and even understand that. But right now that feature just isn't there. But you can use this tool to have it. Don't worry: it is not made by some random guy who wants to trojan your server! It is made by the very same developer who wrote the product itself…"

Cust: "It is not supported, what if it breaks something?"

Me: "So are all resource kit tools, in general. written by some dev guy in his free five minutes, and usually unsupported. Still very useful, though. Most of them. And they usually do work, you know that much, don't you?"

Cust: "But why on a blog?"

Me: "What's wrong with this? People are just trying to make customer's life easier by being transparent and open and direct in their communication, just talking RIGHT to the customers. People talking to people, bypassing the prehistoric bureaucracy structure of companies… the same happens on many other sites, just think for example… those are just tools that a support guy like me has written and wants to share because they might be useful…"

Cust: "But I can't follow/read all the blogs out there! I don't have time for it"

Me: "Why not? I have thousands of feeds in my aggregator and…"

Cust: "I don't have time and I don't want to read them, because I pay for support, so I don't expect this stuff to be in blogs"

Me: "Well, I see, since you pay for support, you are paying ME – in fact I am working with you on this product precisely as part of that paid support. That's why I am here to tell you that this tool exists, in case you had not heard of it, so you actually know about it without having to read that yourself on any blog… does that sound like a deal? Where's the issue?"

Cust: "Sgrunt. I want something official, I don't like this blog stuff"



I thought this was particularly interesting, not because I want to make fun of this person. I do respect him and I think he just has a different point of view. But in my opinion this conversation shows (and made me think about) an aspect of that "generation gap" inside Microsoft that Hugh talks about here:

"[…]4.30 Hugh talks about a conversation he had with a few people inside Microsoft- how there’s a generation gap growing within the company, between the Old Guard, and the new generation of Microsofties, who see their company in much more open, organic terms.[…]"

Basically this tells me that the generation gap is not happening only INSIDE Microsoft: it affects our customers too. Which makes it even more difficult to talk to some of them, as we change. Traditions are hard to change.

Looking at OpsMgr2007 Alert trend with Command Shell

Friday, January 25th, 2008

It's Friday night, I am quite tired and I can't be asked to write a long post. But I have not written much all week, not even updated my Twitter, and now I want to finish the week with at least some goodies. So this is the turn of a couple of Powershell commands/snippets/scripts that will count alerts and events generated each day: this information could help you understand the trends of events and alerts over time in a Management Group. They are nothing fancy at all, but they can still be useful to someone out there. In the past (MOM 2005) I used to gather this kind of information with SQL queries against the operations database. But now, with Powershell, everything is exposed as objects and it is much easier to get information without really getting your hands dirty with the database :-)

#Number of Alerts per day

$array = @()
$alerttimes = Get-Alert | Select-Object TimeRaised

foreach ($datetime in $alerttimes){
    # collect each alert's TimeRaised so we can group on its Date part
    $array += $datetime.TimeRaised
}

$array | Group-Object Date

#Number of Events per day

$array = @()
$eventtimes = Get-Event | Select-Object TimeGenerated

foreach ($datetime in $eventtimes){
    # same technique, this time on each event's TimeGenerated
    $array += $datetime.TimeGenerated
}

$array | Group-Object Date

Beware that these "queries" might take a long time to execute (especially the events one) depending on the amount of data and your retention policy.
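
As a side note, the same count can probably be obtained in a single pipeline, grouping directly on a calculated property (a minimal sketch, with the same caveats about execution time):

Get-Alert | Group-Object {$_.TimeRaised.Date} | Select-Object Count, Name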

This is of course just scratching the surface of the amount of amazing things you can do with Powershell in Operations Manager 2007. For this kind of information you might want to keep an eye on the official "System Center Operations Manager Command Shell" blog.


Monday, January 14th, 2008

A while ago, talking to some friends, I was mentioning how cool it was that Flickr provides APIs, so that you can always get your data out of it, if you want to. There are several downloader applications that I found on the Internet, but I have not yet chosen one that I completely like among the few that I've tried. So, inspired by Kosso's PHP script for enumerating your photos on Flickr, I thought I'd port it to Powershell and make my own version of it. Just for the fun of it.

My Powershell script does not do everything that Kosso's does: I don't build a web page showing description and comments. I suppose this is because the original script was made with PHP, which you usually run on a web server, where outputting HTML is the standard thing to do. I just concentrated on the "download" part, since mine is a console script. You can think of mine as a "full backup" script. Full… well, at least of all your photos, if not of all the metadata.

It should be trivial to extend anyway, also considering that Powershell's XML type accelerator really makes it extremely easy to parse the output of a REST API such as Flickr's (I would say even easier and more readable than PHP's simplexml). There is a ton of things that could be extended/improved in the script… supporting proxy servers, accepting more parameters for things that are now hardcoded… and a million other things. Even this way, though, I think that the script can be useful to show a number of techniques in Powershell. Or just to download your photos :-)

So you can download the script from here: Get-FlickrPhotos.ps1
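
To give an idea of the XML type accelerator technique used in the script, here is a minimal sketch (not the actual Get-FlickrPhotos.ps1 code; the API key and user id below are fake placeholders):

$apikey = "YOUR_API_KEY"      # placeholder: get a real key from Flickr
$userid = "12345678@N00"      # placeholder NSID
$client = New-Object System.Net.WebClient
# the REST response is plain XML, so casting to [xml] parses it in one go
[xml]$rsp = $client.DownloadString("http://api.flickr.com/services/rest/?method=flickr.people.getPublicPhotos&api_key=$apikey&user_id=$userid")
# then you navigate the XML as if it were an object hierarchy
$rsp.rsp.photos.photo | foreach { $_.title }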


Friday, January 4th, 2008

I just read from Jeffrey Snover about this newly born Italian PowerShell community site.

I just created an account for myself on the site… as you know I like PowerShell, so even if I usually prefer writing stuff in English, I will try to hang out there and see how I can contribute to it.

After all, I am Italian… :-)

Simply Works

Thursday, December 27th, 2007

[Photo: Simply Works, uploaded by Daniele Muscetta on Flickr]

I don't know about other people, but I do get a lot to think about when the end of the year approaches: all that I've done, what I have not yet done, what I would like to do, and so on…

And it is a period when memories surface.

I found the two old CD-ROMs you can see in the picture. And those are memories.
missioncritical software was the company that invented a lot of stuff that became Microsoft's products: for example ADMT and Operations Manager.

The black CD contains SeNTry, the "enterprise event manager", what later became Operations Manager.
On the back of the CD, the company motto at the time: "software that works simply and simply works".
So true. I might digress on this concept, but I won't do that right now.

I have already explained in my other blog what I do for work. Well, that was a couple of years ago anyway. Several things have changed, and we are moving towards offering services that are more measurable and professional. It happens that in a certain job you need to be an "expert" and "specialize" in order to be "seen" or "noticed".
You know I don't really believe in specialization. I have written it all over the place. But you need to make other people happy as well and let them believe what they want, so when you "specialize" they are happier. No, really, it might make a difference in your career :-)

In this regard, I did also mention my "meeting again" with Operations Manager.
That's where Operations Manager helped me: it let me "specialize" in systems and applications management… a field where you need to know a bit of everything anyway: infrastructure, security, logging, scripting, databases, and so on… :-)
This way, everyone wins.

Don't misunderstand me, this does not mean I want to know everything. One cannot possibly know everything, and the more I learn the more I believe I know nothing at all, to be honest. I don't know everything, so please don't ask me everything – I work with mainframes :-)
While that can be a great excuse to avoid neighbours and relatives annoying you with their PCs, on the serious side I still believe that no intelligent individual can be locked into doing one narrow thing, knowing only that one bit, just because it is common thought that you have to act that way.

If I stopped where I am supposed to stop, I would be the standard "IT Pro". I would be fine, sure, but I would get bored soon. I would not learn anything. But I don't feel I am the standard "IT Pro". In fact, funnily enough, on some other blogs out there I have been referenced as a "Dev" (find it on your own, look at their blogrolls :-)). But I am not a Dev either, then… I don't write code for work. I would love to, but I rarely actually do, other than some scripts. Anyway, I tend to escape the definition of the usual "expert" on something… mostly because I want to escape it. I don't see myself represented by those generalizations.

As Phil puts it, when asked "Are software developers – engineers or artists?":

"[…] Don’t take this as a copout, but a little of both. I see it more as craftsmanship. Engineering relies on a lot of science. Much of it is demonstrably empirical and constrained by the laws of physics. Software is less constrained by physics as it is by the limits of the mind. […]"

Craftsmanship. Not science.
And stop calling me an "engineer". I am not an engineer. I was even crap in math, in school!

Anyway, what does this all mean? In practical terms, it means that in the end, whether I want it or not, I do get considered an "expert" on MOM and OpsMgr… and that I will mostly work on those products for the next year too. But that is not bad because, as I said, working on that product means working on many more things too. Also, I can speak to different audiences: those believing in "experts" and those going beyond schemes. It also means that I will have to continue teaching a couple of scripting classes (both VBScript and PowerShell) that nobody else seems to be willing to do (because they are all *expert* in something narrow), and that I will still be hacking together my other stuff (my facebook apps, my wordpress theme and plugins, my server, etc) and even continue to have strong opinions in those other fields that I find interesting and where I am not considered an *expert* 😉

Well, I suppose I've been ranting enough for today…and for this year :-)
I really want to wish everybody again a great beginning of 2008!!! What are you going to be busy with, in 2008 ?

ITPro vs. Dev: there is no such a thing.

Tuesday, September 11th, 2007

Dave Winer wisely writes:

[…] I've been pushing the idea that every app should be a platform for a long time, that in addition to a user interface, every app should have a programmatic interface. For me the idea came from growing up using Unix in the 70s, where every app is a toolkit and the operating system is a scripting language. Wiring things together is an integral part of being a Unix user. It's why programmers like Unix so much […]

It is entirely true. The limits are blurry, IMHO. In the Unix world it is common to find full-fledged "applications" which have been written from the ground up by people doing SysAdmin tasks, and those "applications" are usually just… scripts. Simple shell scripts, or something more evolved (PERL, PHP, Python), it does not really matter.

I am so tired of the division traditionally made in the Microsoft world between "Developers" and "IT Professionals". We even have separate sites for the two audiences: MSDN and TechNet. There are separate "TechEd" events: for "Devs" and for "IT Pros". There are blogs that are divided among the two "audiences"…

There aren't two different audiences, really. There are people, with various degrees of expertise. There is no such thing as a "developer" who doesn't know a bit about how the underlying system works: his code is gonna suck. And there is no such thing as an "IT Pro" who builds and integrates and manages systems without the faintest idea of how things work "behind the GUI": he's gonna screw things up, regardless of how many step-by-step (click-by-click?) procedures you spoon-feed him.

That's why automation and integration are best done by people who know how to write a bit of code.

The PowerShell folk GET IT.

Powershell and RegExp: a "match" made my day.

Thursday, August 9th, 2007

Today I was working with a customer and friend (Claudio Latini, whom I thank for the permission to post this, which is also the work of his brain – especially the regular expression you'll see reading on!).

We are running several projects and activities together and, among several other things, he's in the process of migrating his users from Exchange 2003 to Exchange 2007. In this infrastructure, he has some ISA Servers that publish both the Exchange 2003 and the Exchange 2007 frontends.

Now he wanted to know HOW MANY and WHICH ONES of his users actually have a PocketPC or other Windows Mobile device and were actively connecting to the old FrontEnd. You give out mobile devices to people, but those things are usually less "managed" – when compared to corporate PCs, at least. So you lose a bit of control over them… usually people with mobile devices using ActiveSync in companies are managers, and especially since some of them might be on holiday at the moment, it was important to know WHO the people were that had to be told to reconfigure their device to point to the new name/server BEFORE they would start complaining about ActiveSync not working anymore…

So how do you figure out who's connecting ?

I am NO Exchange expert whatsoever… but one thing that came in handy was that an ISA Server was reverse-publishing the frontend server. I know ISA (and firewalls/proxies in general) much better than Exchange, so I can help on that side. In the log files, ActiveSync connections looked like the following URL, passing most parameters in the POST request: (and on an unrelated note: yes, if you try to crawl this link, you are a bot :-))

So we exported the ISA logs (there are several tools for this, including "Extract logs", but we did not use a script; we just used a filter for the correct publishing rule in the "Monitoring – Logging" tab in the ISA Server Console and then copied and pasted those log lines) and tried to see if PowerShell could help tackle the issue.

Here we load our sample log (in a real log you would have much more information, with each single line wrapping several console rows; I cut it down to the URL to make it more readable).

PS> get-content log.txt

We know Get-Content does not just display the file, it loads the file into a string array.
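
A quick way to verify that in the console (hypothetical session):

PS> (get-content log.txt).GetType().FullName
System.Object[]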

So we can cycle through the file and try to extract (using a regexp) the string after "User=" and before the first ampersand ("&"), which translates into the following regular expression:

User=(?<nome>.*?)&

(the regexp has been the most difficult thing to figure out, but it is well worth the hassle once you've done it…)
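
To see it in action in isolation, here is a quick test against a single made-up log line (the URL is invented for illustration, but the shape matches the real ones): the named capture (?<nome>...) is what we will read back from $matches, and the lazy .*? stops at the first ampersand instead of grabbing the rest of the line.

PS> "GET /Microsoft-Server-ActiveSync?User=Mario&DeviceId=ABC123" -match "User=(?<nome>.*?)&"
True
PS> $matches["nome"]
Mario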

PS> get-content log.txt | foreach {$_ -match "User=(?<nome>.*?)&" | out-null; $matches}
Name                           Value
----                           -----
nome                           Mario
0                              User=Mario&
nome                           Gino
0                              User=Gino&
nome                           Antonio
0                              User=Antonio&
nome                           Antonio
0                              User=Antonio&
nome                           Gino
0                              User=Gino&
nome                           Antonio
0                              User=Antonio&
nome                           Antonio
0                              User=Antonio&
nome                           Mario
0                              User=Mario&
nome                           Mario
0                              User=Mario&
nome                           Mario
0                              User=Mario&
nome                           Mario
0                              User=Mario&
nome                           Antonio
0                              User=Antonio&
nome                           Antonio
0                              User=Antonio&
nome                           Mario
0                              User=Mario&
nome                           Antonio
0                              User=Antonio&
nome                           Antonio
0                              User=Antonio&
nome                           Mario
0                              User=Mario&
nome                           Antonio
0                              User=Antonio&
nome                           Antonio
0                              User=Antonio&
nome                           Mario
0                              User=Mario&
nome                           Antonio
0                              User=Antonio&
nome                           Antonio
0                              User=Antonio&
nome                           Mario
0                              User=Mario&
nome                           Mario
0                              User=Mario&

This seems to work. Now we only have to get the Named Captures called "nome" (containing the user name):

PS> get-content log.txt | foreach {$_ -match "User=(?<nome>.*?)&" | out-null; $matches["nome"]}
Mario
Gino
Antonio
Antonio
Gino
Antonio
Antonio
Mario
Mario
Mario
Mario
Antonio
Antonio
Mario
Antonio
Antonio
Mario
Antonio
Antonio
Mario
Antonio
Antonio
Mario
Mario

Awesome. Now sort them and remove duplicates, which is just one more command in our pipeline:

PS> get-content log.txt | foreach {$_ -match "User=(?<nome>.*?)&" | out-null; $matches["nome"]} | sort-object -unique
Antonio
Gino
Mario
Now you can call those three users and tell them to modify their ActiveSync configuration :-)

Death by right-click -> Delete ? Nope. PowerShell.

Wednesday, May 30th, 2007

So at one stage I was testing the RSS reader capabilities of Outlook 2007, and I imported an OPML file with roughly 500 feeds! Of course I was NOT interested in reading ALL of them, and fetching them all and syncing the content into my mailbox was causing quite a bit of work on my machine…

So I figured out it was possible to remove the subscription (from the Tools menu -> Account Settings -> RSS Feeds) but the folders were left there. Now, I didn't want to have those 500 folders in my mailbox, and I did not even want to die by right-clicking, pressing "delete", confirming…. all of this 500 times! No way.

So I wrote this little PowerShell script; I guess it *might* be helpful to someone at one stage, who knows?

[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Interop.Outlook")
$oApp = New-Object -COM 'Outlook.Application'
$rss = $oApp.GetNamespace("MAPI").GetDefaultFolder([Microsoft.Office.Interop.Outlook.OlDefaultFolders]::olFolderRssFeeds)
foreach ($folder in $rss.Folders) { $folder.Delete() }   # goodbye, 500 folders

Please note that if you don't have the Office Interop Assemblies installed on your machine, you can't use the first line. As a result, you will have to change the third line hardcoding the number that represents the RSSFeeds folder, so it would become:

$rss = $oApp.GetNamespace("MAPI").GetDefaultFolder(25)

Note: I found out (later, of course) that there is a much more general post on this subject (that is, automating Outlook through PowerShell):


Wednesday, January 10th, 2007

This is soooo cool! An "Out-Flickr" script for PowerShell:!13469C7B7CE6E911!285.entry


Friday, November 24th, 2006

[Edited again 25th November – Jachym gave me some suggestions and insights on the use of parameters, and I slightly changed/fixed the original code I had posted yesterday. There are still some more things that could be improved, of course, but I'll leave them to the future, next time I'll have time for it (who knows when that will be?)]

This one is a post regarding my first test writing a cmdlet for PowerShell. A few days after changing my blog's title to "$daniele.rant | Out-Blog" (where Out-Blog was a fantasy cmdlet name, and the title was just meant to mimic PowerShell syntax in a funny way), I stumbled across a wonderful blog post that describes how to use the assemblies of "Windows Live Writer". Then I saw the light: I could actually implement an "Out-Blog" cmdlet. I am not sure what this could be useful for… but I thought it was funny to experiment with. I followed the HOW TO information on another blog post to guide me through the coding.

The result is the code that follows: what you see is pretty much Boschin's code wrapped into a cmdlet class. Nothing fancy. Just a test. I thought someone might find it interesting. It is provided "AS IS", mainly for educational purpose (MINE, only mine… I'm the one whose education is being improved, not you :-))
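
For context, once the cmdlet is compiled into a snap-in and registered, I would expect usage to look something like this (a hypothetical session: the parameter values are made up, and the pipeline binding for the text is my assumption, see the note in the code):

PS> out-blog -Title "My Rant" -Text "Hello world from a cmdlet!" -BlogApiEndPoint "http://blog.example.com/xmlrpc.php" -UserName "daniele" -Password "secret" -Published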



using System;
using System.Collections.Generic;
using System.Text;
using System.Management.Automation;
using WindowsLive.Writer.BlogClient.Clients;
using WindowsLive.Writer.BlogClient;
using WindowsLive.Writer.CoreServices;
using WindowsLive.Writer.CoreServices.Settings;
using WindowsLive.Writer.Extensibility.BlogClient;
using Microsoft.Win32;

namespace LiveWriterCmdlet
{
    [Cmdlet("out", "blog", SupportsShouldProcess = true)]
    public sealed class OutBlogCmdlet : Cmdlet
    {
        [Parameter(Position = 0, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string Title
        {
            get { return _title; }
            set { _title = value; }
        }
        private string _title;

        // NOTE: the [Parameter] attribute on Text was lost in the original post;
        // Position = 1 and ValueFromPipeline = true are my reconstruction, since
        // the whole point of Out-Blog is piping text into it
        [Parameter(Position = 1, Mandatory = true, ValueFromPipeline = true, ValueFromPipelineByPropertyName = true)]
        public string Text
        {
            get { return _text; }
            set { _text = value; }
        }
        private string _text;

        [Parameter(Position = 2, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string BlogApiEndPoint
        {
            get { return _blogapiendpoint; }
            set { _blogapiendpoint = value; }
        }
        private string _blogapiendpoint;

        [Parameter(Position = 3, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string UserName
        {
            get { return _username; }
            set { _username = value; }
        }
        private string _username;

        [Parameter(Position = 4, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string Password
        {
            get { return _password; }
            set { _password = value; }
        }
        private string _password;

        // (Position 5 is skipped in the original post)
        [Parameter(Position = 6, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string ProxyAddress
        {
            get { return _proxyaddress; }
            set { _proxyaddress = value; }
        }
        private string _proxyaddress;

        [Parameter(Position = 7, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public int ProxyPort
        {
            get { return _proxyport; }
            set { _proxyport = value; }
        }
        private int _proxyport;

        [Parameter(Position = 8, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string ProxyUserName
        {
            get { return _proxyusername; }
            set { _proxyusername = value; }
        }
        private string _proxyusername;

        [Parameter(Position = 9, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string ProxyPassword
        {
            get { return _proxypassword; }
            set { _proxypassword = value; }
        }
        private string _proxypassword;

        [Parameter(Position = 10, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public SwitchParameter Published
        {
            get { return _published; }
            set { _published = value; }
        }
        private bool _published;

        protected override void BeginProcessing()
        {
            // the original condition used "|" between two checks that could never
            // both be false; a null-or-empty test is what was intended
            if (!string.IsNullOrEmpty(ProxyAddress))
            {
                WebProxySettings.ProxyEnabled = true;
                WebProxySettings.Hostname = ProxyAddress;
                WebProxySettings.Port = ProxyPort;
                WebProxySettings.Username = ProxyUserName;
                WebProxySettings.Password = ProxyPassword;
            }
            else
            {
                WebProxySettings.ProxyEnabled = false;
            }
        }

        protected override void ProcessRecord()
        {
            if (ShouldProcess(Text))
            {
                // read Live Writer's stored settings and build credentials for the blog
                ISettingsPersister persister = new RegistrySettingsPersister(Registry.CurrentUser, @"Software\Windows Live Writer");
                IBlogCredentials credentials = new BlogCredentials(new SettingsPersisterHelper(persister));
                IBlogCredentialsAccessor credentialsAccessor = new BlogCredentialsAccessor("dummy-value", credentials);

                credentials.Username = UserName;
                credentials.Password = Password;

                MovableTypeClient client = new MovableTypeClient(new Uri(BlogApiEndPoint), credentialsAccessor, PostFormatOptions.Unknown);

                // build the post and send it through the MovableType API client
                BlogPost MyPost = new BlogPost();
                MyPost.Title = Title;
                MyPost.Contents = Text;

                client.NewPost("dummy-value", MyPost, Published);

                WriteVerbose("Posted Successfully.");