Got Orphaned OpsMgr Objects?

Have you ever wondered what happens if, in Operations Manager, you delete a Management Server or Gateway that manages objects (such as network devices) or that has agents pointing to it as their only primary server?

The answer is simple, but not very pleasant: you get ORPHANED objects, which will linger in the database but can no longer be “seen” or re-assigned from the GUI.

So the first thing I want to share is a query to determine IF you have any of those orphaned agents. Even if you already know you do, since you cannot “see” them from the console, you still have to dig their names out of the database. Here’s a query I got from a colleague in our reactive support team:


-- Check for orphaned health services (e.g. agent).
declare @DiscoverySourceId uniqueidentifier;
SET @DiscoverySourceId = dbo.fn_DiscoverySourceId_User();
SELECT TME.[TypedManagedEntityid], HS.PrincipalName
FROM MTV_HealthService HS
INNER JOIN dbo.[BaseManagedEntity] BHS WITH(nolock)
ON BHS.[BaseManagedEntityId] = HS.[BaseManagedEntityId]
-- get host managed computer instances
INNER JOIN dbo.[TypedManagedEntity] TME WITH(nolock)
ON TME.[BaseManagedEntityId] = BHS.[TopLevelHostEntityId]
AND TME.[IsDeleted] = 0
INNER JOIN dbo.[DerivedManagedTypes] DMT WITH(nolock)
ON DMT.[DerivedTypeId] = TME.[ManagedTypeId]
INNER JOIN dbo.[ManagedType] BT WITH(nolock)
ON DMT.[BaseTypeId] = BT.[ManagedTypeId]
AND BT.[TypeName] = N'Microsoft.Windows.Computer'
-- only with missing primary
LEFT OUTER JOIN dbo.Relationship HSC WITH(nolock)
ON HSC.[SourceEntityId] = HS.[BaseManagedEntityId]
AND HSC.[RelationshipTypeId] = dbo.fn_RelationshipTypeId_HealthServiceCommunication()
AND HSC.[IsDeleted] = 0
INNER JOIN DiscoverySourceToTypedManagedEntity DSTME WITH(nolock)
ON DSTME.[TypedManagedEntityId] = TME.[TypedManagedEntityId]
AND DSTME.[DiscoverySourceId] = @DiscoverySourceId
WHERE HS.[IsAgent] = 1
AND HSC.[RelationshipId] IS NULL;
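If you’d rather drive the check from PowerShell than from SQL Management Studio, something like the following works too. This is a minimal sketch, not part of the original query: it assumes Windows authentication, the default “OperationsManager” database name, and that you saved the query above to a file called OrphanCheck.sql (the server name and file name are placeholders to adjust).

$sqlServer = "SQLSERVER1"   # placeholder - your OpsMgr database server
$connString = "Server=$sqlServer;Database=OperationsManager;Integrated Security=SSPI"

$conn = New-Object System.Data.SqlClient.SqlConnection($connString)
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = (Get-Content .\OrphanCheck.sql) -join "`n"

$reader = $cmd.ExecuteReader()
while ($reader.Read())
{
    # columns returned by the query: TypedManagedEntityId, PrincipalName
    Write-Host $reader.GetValue(0) $reader.GetValue(1)
}
$reader.Close()
$conn.Close()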

Once you have identified the agent you need to re-assign to a new management server, this can be done through the SDK. Below is a PowerShell script I wrote that re-assigns it to the RMS. It has to run from within the OpsMgr Command Shell.
You still need to change the logic that chooses which agent to re-assign – this is meant as a starting base. You could easily expand it to accept parameters and/or consume an input text file, or to use a different Management Server than the RMS (see the sketch after the script)… you get the point.

$mg = (Get-ManagementGroupConnection).ManagementGroup
$mrc = Get-RelationshipClass | where {$_.Name -like "*Microsoft.SystemCenter.HealthServiceCommunication*"}
$cmro = New-Object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
$rms = (Get-RootManagementServer).HostedHealthService
$deviceclass = $mg.GetMonitoringClass("HealthService")
$mc = Get-Connector | where {$_.Name -like "*MOM Internal Connector*"}
Foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    #the next line should be changed to pick the right agent to re-assign
    if ($obj.DisplayName -match 'dsxlab')
    {
        Write-Host $obj.DisplayName
        $imdd = New-Object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
        $cmro.SetSource($obj)
        $cmro.SetTarget($rms)
        $imdd.Add($cmro)
        $imdd.Commit($mc)
    }
}
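As an example of the kind of expansion I mean, here is a rough sketch (untested, using the same SDK calls as the script above) that wraps the logic in a function taking the agent name and the target Management Server as parameters – the function name and the input file are hypothetical:

Function Reassign-Agent([string]$agentName, [string]$targetServerName)
{
    $mg = (Get-ManagementGroupConnection).ManagementGroup
    $mrc = Get-RelationshipClass | where {$_.Name -like "*Microsoft.SystemCenter.HealthServiceCommunication*"}
    $mc = Get-Connector | where {$_.Name -like "*MOM Internal Connector*"}
    $hsclass = $mg.GetMonitoringClass("HealthService")

    # find the target Management Server's health service by its display name
    $target = $mg.GetMonitoringObjects($hsclass) | where {$_.DisplayName -eq $targetServerName}

    Foreach ($obj in $mg.GetMonitoringObjects($hsclass))
    {
        if ($obj.DisplayName -eq $agentName)
        {
            Write-Host "re-assigning $($obj.DisplayName) to $targetServerName"
            $cmro = New-Object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
            $imdd = New-Object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
            $cmro.SetSource($obj)
            $cmro.SetTarget($target)
            $imdd.Add($cmro)
            $imdd.Commit($mc)
        }
    }
}

# e.g. consuming an input text file, one agent FQDN per line:
# Get-Content .\agents.txt | foreach { Reassign-Agent $_ "MS01.domain.dom" }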

Similarly, you might get orphaned network devices. The script below re-assigns all Network Devices to the RMS. This script is actually something I have had even longer than the other one (yes, it has been sitting in my “digital drawer” for a couple of years or more…) and uses the same concept – only you might notice that the relationship’s source and target are “reversed”, since the relationships are different:

  • the Management Server (source) “manages” the Network Device (target)
  • the Agent (source) “talks” to the Management Server (target)

With a bit of added logic it should be easy to have it work for specific devices only (see the sketch after the script).

$mg = (Get-ManagementGroupConnection).ManagementGroup
$mrc = Get-RelationshipClass | where {$_.Name -like "*Microsoft.SystemCenter.HealthServiceShouldManageEntity*"}
$cmro = New-Object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
$rms = (Get-RootManagementServer).HostedHealthService
$deviceclass = $mg.GetMonitoringClass("NetworkDevice")
$mc = Get-Connector | where {$_.Name -like "*MOM Internal Connector*"}
Foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    Write-Host $obj.DisplayName
    $imdd = New-Object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
    $cmro.SetSource($rms)
    $cmro.SetTarget($obj)
    $imdd.Add($cmro)
    $imdd.Commit($mc)
}
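For instance – a hedged sketch, reusing the variables set up by the script above – the loop could filter on the device name before committing the new relationship ($pattern is a hypothetical placeholder):

$pattern = "^10\.1\."   # placeholder - match your device names or IPs
Foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    if ($obj.DisplayName -match $pattern)
    {
        Write-Host $obj.DisplayName
        $imdd = New-Object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
        $cmro.SetSource($rms)
        $cmro.SetTarget($obj)
        $imdd.Add($cmro)
        $imdd.Commit($mc)
    }
}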

Disclaimer

The information in this weblog is provided “AS IS” with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided “AS IS” without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

The mystery of the lost registry values

During the OpsMgr Health Check engagement we use custom code to assess the customer’s Management Group, as I wrote here already. Given that the customer tells us which machine is the RMS, one of the very first things our tool does is connect to the RMS’s registry and check the values under HKLM\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup to see which machine holds the database. It is a rather critical piece of information for us, as we run a number of queries afterwards… so we need to know where the db is, obviously 🙂

I learned from here http://mybsinfo.blogspot.com/2007/01/powershell-remote-registry-and-you-part.html how to access the registry remotely through PowerShell, by using .NET classes. This is also one of the methods illustrated in this other article on the TechNet Script Center http://www.microsoft.com/technet/scriptcenter/resources/qanda/jan09/hey0105.mspx

Therefore the “core” instructions of the function I was using to access the registry looked like the following:

Function GetValueFromRegistry ([string]$computername, $regkey, $value)
{
    $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $computername)
    $regKey = $reg.OpenSubKey("$regkey")
    $result = $regKey.GetValue("$value")
    return $result
}

 

[Note: the actual function is bigger, and contains error handling, logging, and a number of other things that are unnecessary here]

Therefore, the function was called as follows:
GetValueFromRegistry $RMS "SOFTWARE\\Microsoft\\Microsoft Operations Manager\\3.0\\Setup" "DatabaseServerName"
Now so far so good.

In theory.

 

Now, for some reason that I could not immediately explain, we had noticed that this piece of code performing registry access, while working most of the time, on SOME occasions was giving errors about not being able to open the registry value…


When you are onsite with a customer conducting an assessment, the PFE engineer does not always have the time to troubleshoot the error… as time is critical, we usually resorted to just running the assessment from ANOTHER machine, and this “solved” the issue… but it always left me wondering WHY it was giving an error. I had suspected a permissions issue at first, but that could not be it: performing the assessment from another machine with the same user was working!

A few days ago my colleague and buddy Stefan Stranger figured out that this was related to the platform architecture:

  • x64 client to x64 RMS was working
  • x64 client to x86 RMS was working
  • x86 client to x86 RMS was working
  • x86 client to x64 RMS was NOT working

You don’t need to use our custom code to reproduce this, REGEDIT shows the behavior as well.

If, from a 64-bit server, you open a remote registry connection to a 64-bit RMS server, you can see all OpsMgr registry keys.


If, however, from a 32-bit server, you open a remote registry connection to a 64-bit RMS server, you only see SOME – not ALL – OpsMgr registry keys.

So here’s the reason! This is what was happening! How could I not think of it before? It had nothing to do with permissions, but with registry redirection! The issue was happening because the 32-bit machine uses the 32-bit registry editor, and when accessing a 64-bit machine it defaults to the Wow6432Node location in the registry. Not all OpsMgr data is present under the WOW64 location on a 64-bit machine – only some of it.

So, just like regedit, the 32-bit PowerShell and the 32-bit .NET Framework were being redirected to the 32-bit-compatibility registry keys… not finding the values we needed, whereas a 64-bit application could find them. Any 32-bit application, by default, gets redirected to a 32-bit-safe view of the registry.

So, after finally UNDERSTANDING what the issue was, I started wondering: OK… but how can I access the REAL “HKLM\SOFTWARE\Microsoft” key on a 64-bit machine when running this FROM a 32-bit machine – WITHOUT being redirected to “HKLM\SOFTWARE\Wow6432Node\Microsoft”? What if my application CAN deal just fine with those values and actually NEEDS to access them?

The answer wasn’t as easy as the question. I did a bit of digging on this, and I still have NOT found a way to do it with the .NET classes. It seems that in a lot of situations, PowerShell and even the .NET classes are nice, sweet wrappers on the underlying Windows APIs… but for how sweet and easy they are, they are very often not very complete wrappers – letting you do just about enough for most situations, but not quite everything you could do with the API underneath. But I digress, here…
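(For completeness: the .NET Framework did eventually catch up – version 4.0 added a RegistryView parameter for exactly this scenario. The sketch below assumes you are running on .NET 4.0 or later, which was not an option for me at the time:)

# requires .NET 4.0+ - this RegistryView overload does not exist on .NET 2.0/3.5
$hive = [Microsoft.Win32.RegistryHive]::LocalMachine
$view = [Microsoft.Win32.RegistryView]::Registry64
$reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey($hive, $computername, $view)
$key = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup")
$key.GetValue("DatabaseServerName")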

The good news is that I did manage to get this working, but I had to resort to using dear old WMI StdRegProv… There are a number of locations on the Internet mentioning the issue of accessing the 32-bit registry from 64-bit machines or vice versa, but all the examples I found were using VBScript. I needed it in PowerShell, therefore I started from the VBScript example code that is present here, and ported it to PowerShell.

Handling the WMI COM object from PowerShell was slightly less intuitive than in VBScript, and it took me a couple of hours to figure out how to change some stuff, especially the bit that sets the parameters collection, from this:

Set Inparams = objStdRegProv.Methods_("GetStringValue").Inparameters
Inparams.Hdefkey = HKLM
Inparams.Ssubkeyname = RegKey
Inparams.Svaluename = RegValue
Set Outparams = objStdRegProv.ExecMethod_("GetStringValue", Inparams, , objCtx)

INTO this:

$Inparams = ($objStdRegProv.Methods_ | where {$_.Name -eq "GetStringValue"}).InParameters.SpawnInstance_()
($Inparams.Properties_ | where {$_.Name -eq "Hdefkey"}).Value = $HKLM
($Inparams.Properties_ | where {$_.Name -eq "Ssubkeyname"}).Value = $regkey
($Inparams.Properties_ | where {$_.Name -eq "Svaluename"}).Value = $value
$Outparams = $objStdRegProv.ExecMethod_("GetStringValue", $Inparams, "", $objNamedValueSet)

 

I have only done limited testing at this point and, even if the actual work now takes nearly 15 lines of code vs. the previous 3 lines of the .NET implementation, it at least seems to work just fine.

What follows is the complete code of my replacement function, in all its ugly glory:

 

Function GetValueFromRegistryThruWMI([string]$computername, $regkey, $value)
{
    #constant for the HKLM hive
    $HKLM = "&h80000002"

    #creates an SWbemNamedValueSet object
    $objNamedValueSet = New-Object -COM "WbemScripting.SWbemNamedValueSet"

    #adds the value that requests the target to provide 64bit-registry info
    $objNamedValueSet.Add("__ProviderArchitecture", 64) | Out-Null

    #back to all the other usual COM objects for WMI that you have used a zillion times in VBScript
    $objLocator = New-Object -COM "Wbemscripting.SWbemLocator"
    $objServices = $objLocator.ConnectServer($computername, "root\default", "", "", "", "", "", $objNamedValueSet)
    $objStdRegProv = $objServices.Get("StdRegProv")

    # Obtain an InParameters object specific to the method
    $Inparams = ($objStdRegProv.Methods_ | where {$_.Name -eq "GetStringValue"}).InParameters.SpawnInstance_()

    # Add the input parameters
    ($Inparams.Properties_ | where {$_.Name -eq "Hdefkey"}).Value = $HKLM
    ($Inparams.Properties_ | where {$_.Name -eq "Ssubkeyname"}).Value = $regkey
    ($Inparams.Properties_ | where {$_.Name -eq "Svaluename"}).Value = $value

    #Execute the method
    $Outparams = $objStdRegProv.ExecMethod_("GetStringValue", $Inparams, "", $objNamedValueSet)

    #show the return value
    ($Outparams.Properties_ | where {$_.Name -eq "ReturnValue"}).Value

    if (($Outparams.Properties_ | where {$_.Name -eq "ReturnValue"}).Value -eq 0)
    {
        write-host "it worked"
        $result = ($Outparams.Properties_ | where {$_.Name -eq "sValue"}).Value
        write-host "Result: $result"
        return $result
    }
    else
    {
        write-host "nope"
    }
}

 

which can be called similarly to the previous one:
GetValueFromRegistryThruWMI $RMS "SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup" "DatabaseServerName"

[Note: you don’t need to double/escape the backslashes here, compared to the .NET implementation]

Enjoy your cross-architecture registry access: from 32bit to 64bit – and back!

Get-WmiCustom (aka: Get-WMIObject with timeout!)

I make heavy use of WMI.

But when using it to gather information from customers’ machines for assessments, I sometimes find the occasional broken WMI repository. There are a number of ways in which WMI can become corrupted and return weird results. Most of the time you just get errors, such as “Class not registered” or “Provider load failure”. I can handle those errors from within scripts.

But there are some more subtle – and annoying – ways in which the WMI repository can get corrupted. The situations I am talking about are the ones where WMI will accept your query… will say it is executing it… but will never actually return any error, and just stay stuck performing your query forever. Until your client application decides to time out. Which in some cases does not happen.

Now that was my issue – when my assessment script (which was using the handy PowerShell Get-WmiObject cmdlet) would hit one of those machines, the whole script would hang forever and never finish its job. OK, sure, the real solution would be actually FIXING the WMI repository and trying again. But remember I am talking about an assessment: if the information I am getting is just one piece of a bigger puzzle, and I can continue without it, I want to be able to skip that piece, maybe the whole section, report an error saying I was not able to get that information, and continue gathering the rest. I can still fix the issue on the machine afterwards AND run the assessment script again, but in the first place I just want to get a picture of how the system looks. With the good and with the bad things. Especially, I want the whole picture – not just a piece of it.

Unfortunately, the Get-WmiObject cmdlet does not let you specify a timeout. Therefore I cooked up my own function, which behaves compatibly with Get-WmiObject but adds a “-timeout” parameter. I dubbed it “Get-WmiCustom”.

Function Get-WmiCustom([string]$computername,[string]$namespace,[string]$class,[int]$timeout=15)
{
    $ConnectionOptions = New-Object System.Management.ConnectionOptions
    $EnumerationOptions = New-Object System.Management.EnumerationOptions

    $timeoutseconds = New-TimeSpan -Seconds $timeout
    $EnumerationOptions.set_timeout($timeoutseconds)

    $assembledpath = "\\" + $computername + "\" + $namespace
    #write-host $assembledpath -foregroundcolor yellow

    $Scope = New-Object System.Management.ManagementScope $assembledpath, $ConnectionOptions
    $Scope.Connect()

    $querystring = "SELECT * FROM " + $class
    #write-host $querystring

    $query = New-Object System.Management.ObjectQuery $querystring
    $searcher = New-Object System.Management.ManagementObjectSearcher
    $searcher.set_options($EnumerationOptions)
    $searcher.Query = $querystring
    $searcher.Scope = $Scope

    trap { $_ }
    $result = $searcher.Get()

    return $result
}

You can call it as follows, similarly to how you would call Get-WmiObject:

get-wmicustom -class Win32_Service -namespace "root\cimv2" -computername server1.domain.dom

or, of course, specifying the timeout (in seconds):

get-wmicustom -class Win32_Service -namespace "root\cimv2" -computername server1.domain.dom -timeout 1

and obviously, since the function returns objects just like the original cmdlet, it is also possible to pipe them to other commands:

get-wmicustom -class Win32_Service -namespace "root\cimv2" -computername server1.domain.dom -timeout 1 | Format-Table
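And since the whole point was skipping broken machines rather than hanging on them, here is a minimal sketch (the machine list is a placeholder) of how an assessment loop might use it to report a failure and move on. Note that, with the trap above, an error record may come back in the output instead of a result, so the check is deliberately loose:

$computers = "server1.domain.dom", "server2.domain.dom"   # placeholder list
foreach ($computer in $computers)
{
    $services = Get-WmiCustom -class Win32_Service -namespace "root\cimv2" -computername $computer -timeout 30
    if (($services -eq $null) -or ($services -is [System.Management.Automation.ErrorRecord]))
    {
        Write-Host "Could not read WMI from $computer - skipping this section." -ForegroundColor Red
        continue
    }
    Write-Host "$computer : $(@($services).Count) services found"
}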

Early Adoptions, Health Checks and New Year Rants.

Generations

Two days ago I read the following Tweet by Hugh MacLeod:

“[…] Early Adopter Problem: How to differentiate from the bandwagon, once the bandwagon starts moving faster than you are […]”

That made me think of the early adoption of a few technologies I have been working with, and how the communities around them evolved. For example:

Operations Manager… early adoption meant that I have been working with it since the beta, and had posted one of the earliest posts about how to use a script in a Unit Monitor back in May 2007 (the product was released in April 2007 and there was NO documentation back then, so we really had to figure out everything ourselves…), but someone seems to think it is worth repeating the very same lesson in November 2008, with not a lot of changes, as I wrote here. I don’t mean to be rude to Anders… repeating things will surely help the late adopters find the information they need, of course.

I also started playing with PowerShell early on. I posted my first (and only) cmdlet back in 2006. It was not much more than an exercise to teach myself how to write one, but that is just to say I started early. I have been using it to automate tasks, for example.

Going back to the quote above, everyone gets on the bandwagon, posting examples and articles. I had been asked a few times about writing articles on OpsMgr and PowerShell usage (for example by www.powershell.it), but I declined, as I was too busy using this knowledge to do stuff for work (where “work” is defined as “work that pays your mortgage”), rather than seeking personal prestige through articles and blogs. Anyway, that kind of article is appearing all over the Internet and the blogosphere now. The above examples made me think of early adoption, and the bandwagon that follows later on… but even as an early adopter, I was never very noisy or visible.

Now, going back to what I do for work (which I mentioned here and here in the past), I work in the Premier Field Engineering organization of Microsoft Services, which provides Premier services to customers. Microsoft Premier customers have a wide range of Premier agreement features and components that they can use to support their people, improve their processes, and improve the productive use of the Microsoft technology they have purchased. Some of these services are known to the world as “Health Checks”, some as “Risk Assessment Programs” (or, shortly, RAPs). These are basically services where one of our technology experts goes onsite and uses a custom, private Microsoft tool to gather a huge amount of data from the product we mean to look at (be it SQL, Exchange, AD or anything else…). The Health Check or RAP tool collects the data and outputs a draft of the report that will be delivered to the customer later on, with all the right sections and chapters. This is done so that every report of the same kind looks consistent, even if the engagement is performed by a different engineer in a different part of the world. The engineer will of course analyze the collected data and write recommendations about what is configured properly and/or what could or should be changed and/or improved in the implementation to make it adhere to Best Practices.

To make sure only the right people actually go onsite to do this job, we have a strict internal accreditation process that must be followed; only accredited resources who know the product well enough, and know exactly how to interpret the data the tool collects, are allowed to deliver the engagement and present/write the findings to the customer.

So why am I telling you this here, and what have I been using my early knowledge of OpsMgr and PowerShell for?

I have used that to write the Operations Manager Health Check, of course!

We already had a MOM 2005 Health Check, but since the technology changed so much from MOM to OpsMgr, we had to write a completely new tool. Jeff (the original MOM 2005 author, who does not have a blog I can link to) and I are the main coders of this tool… and the tool itself is A POWERSHELL script. A longish one, of course (7000 lines, more or less), but nothing more than a PowerShell script, at the end of the day. A few more colleagues helped shape the features and tested the tool, including Kevin Holman. Some of the database queries on Kevin’s blog are in fact what we use to extract some of the data (beware that some of those queries have recently been updated, in case you saved them and are using your local copy!), while other information comes from internal and/or custom queries. Sometimes we use OpsMgr cmdlets or go through the SDK service, but a lot of the time we query the database directly (we really should use the SDK all the time, but for certain stuff direct database access is way faster). It took most of the past year to write it, test it, troubleshoot it, fix it, and deliver the first engagements as “beta” to some customers to help iron out the process… and now the delivery is available! If a year seems like a long time, consider that this is all work that gets done next to what we normally do with customers, not instead of it (i.e. I am not free to sit on my butt all day and just write the tool… I still have to deliver services to customers day in, day out, in the meantime).

Occasionally during this past calendar year, which is approaching its end, I have been willing and able to find some extra time to disclose some bits and pieces, techniques and prototypes of how to use PowerShell and OpsMgr together, such as innovative ways to use PowerShell in OpsMgr against beta features, but in general most of my early adopter’s investment went into the private tool for this engagement, and that is one of the reasons I could not blog or write much about it, it being Microsoft intellectual property.

But it is also true that I did not care to write about other stuff when I considered it too easy or when it could be found in the documentation. I like writing about ideas, thoughts, rants OR things that I discover and that are not well documented at the time I study them… so when I figure things out I like leaving a trail for others to follow. But I am not here to spoon-feed people like some on the bandwagon are doing. Now the bandwagon is busy blogging and writing continuously about every aspect of OpsMgr (known or unknown, documented or not), and the answer to Hugh’s original question is, in my opinion, that it does not really matter what the bandwagon is doing right now. I was never here to do the same thing; I think that is my differentiator. I am not saying that what a bunch of colleagues and enthusiasts are doing is not useful: blogging and writing about the various things they experiment with is interesting, and it will be useful to people. But blogs are useful only up to a certain point. I think blogs are best suited for conversations and thoughts (rather than for “how-tos”), and what I would love to see instead is less marketing hype when new versions are announced and more real, official documentation.

But I think I should stop caring about what the bandwagon is doing, because that’s just another ego trip at the end of the day. What I should more sensibly do is listen to my horoscope instead:

[…] “How do you slay the dragon?” journalist Bill Moyers asked mythologist Joseph Campbell in an interview. By “dragon,” he was referring to the dangerous beast that symbolizes the most unripe and uncontrollable part of each of our lives. In reply to Moyers, Campbell didn’t suggest that you become a master warrior, nor did he recommend that you cultivate high levels of sleek, savage anger. “Follow your bliss,” he said simply. Personally, I don’t know if that’s enough to slay the dragon — I’m inclined to believe that you also have to take some defensive measures — but it’s definitely worth an extended experiment. Would you consider trying that in 2009? […]

Backup or Store stuff to GMail via IMAP in Ruby

Once upon a time, I used to store some small automated backups in GMail just by having the scheduled backup send an email to my GMail account. At one stage they blocked me from doing so, marking those repeated emails as SPAM.

After that, I took a different approach: I kept sending the mail to a mailbox on the SAME server as the backup, and using IMAP I could DRAG-and-DROP the backup attachments from the mailbox on one server to the mailbox on the other server (= GMail). They did not mark me as a spammer that way, of course.
So that worked for a while, but then I got tired of doing it manually.

So the following Ruby script is how I automated the “move offsite” part of that backup.
For completeness, I will give due credit about who set me on the right track: I started off from this example by Ryan.

#!/usr/bin/env ruby
begin_ = Time.now

#includes
require 'net/imap'

##Source Info
$SRCSERVER="mail.muscetta.com"
$SRCPORT=143
$SRCSSL=false
$SRCUSERNAME="daniele"
$SRCPASSWORD=""
$SRCFOLDER="INBOX.Backups"

##Destination Info
$DSTSERVER="imap.gmail.com"
$DSTPORT=993
$DSTSSL=true
$DSTUSERNAME="muscetta@gmail.com"
$DSTPASSWORD=""
$DSTFOLDER="Backup"

#connect to source
puts "connecting to source server #{$SRCSERVER}... nn"
srcimap = Net::IMAP.new($SRCSERVER,$SRCPORT,$SRCSSL)
srcimap.login($SRCUSERNAME, $SRCPASSWORD)
srcimap.select($SRCFOLDER)

#connect to destination
puts "connecting to destination server #{$DSTSERVER}... nn"
dstimap = Net::IMAP.new($DSTSERVER,$DSTPORT,$DSTSSL)
dstimap.login($DSTUSERNAME, $DSTPASSWORD)
dstimap.select($DSTFOLDER)

# Loop through all messages in the source folder.
uids = srcimap.uid_search(['ALL'])
if uids.length > 0
	$count = uids.length
	puts "found #{$count} messages to move... nn"

	srcimap.uid_fetch(uids, ['ENVELOPE']).each do |data|
		mid = data.attr['ENVELOPE'].message_id

		# Download the full message body from the source folder.
		puts "reading message... #{mid}"
		msg = srcimap.uid_fetch(data.attr['UID'], ['RFC822', 'FLAGS', 'INTERNALDATE']).first

		# Append the message to the destination folder, preserving flags and internal timestamp.
		puts "copying message #{mid} to destination..."
		dstimap.append($DSTFOLDER, msg.attr['RFC822'], msg.attr['FLAGS'], msg.attr['INTERNALDATE'])

		#delete the msg
		puts "deleting messsage #{mid}..."
		srcimap.uid_store(data.attr['UID'], '+FLAGS', [:Deleted])
		srcimap.expunge

	end

	#disconnect
	dstimap.close
	srcimap.close
end

total_time = Time.now - begin_
puts "Done. RunTime: #{total_time} sec. nn"

Create a Script-Based Unit Monitor in OpsMgr2007 via the GUI

Warning for people who landed here: this post is VERY OLD. It was written in the early days of struggling with OpsMgr 2007, when nobody really knew how to do things.
I found that this way worked – and it surely does – but what is described here is NOT the recommended way to do things nowadays. This post was only meant to fill a gap that I felt existed, back in 2007.
But as time passes and documentation gets written, knowledge improves.
Therefore, I recommend you read the newly released Composition chapter of the MP Authoring Guide instead
http://technet.microsoft.com/en-us/library/ff381321.aspx – and start building your custom modules to embed scripts as Brian Wren describes in there, so that you can share them between multiple rules and monitors.

This said, below is the original post.

Create a Script-Based Unit Monitor in OpsMgr2007 via the GUI

There is not a lot of documentation for System Center Operations Manager 2007 yet.
It is coming, but a lot of things have changed since the previous release, and I think some more would only help. Also, a lot of the content I am seeing is either too newbie-oriented or too developer-oriented, for some reason.

I have not yet seen a tutorial, webcast or anything else that explains how to create a simple unit monitor that uses a VBScript through the GUI.

So this is how you do it:

Go to the “Authoring” space of the OpsMgr 2007 Operations Console.
Select “Management Pack Objects”, then the “Monitors” node. Right-click and choose “Create a monitor” -> “Unit Monitor”.

You get the “Create a monitor” wizard.

Choose to create a two-state unit monitor based on a script. Creating a three-state monitor would be pretty similar, but I’ll show you the simplest one.
Also, choose a Management Pack that will contain your script and unit monitor, or create a new management pack.

Choose a “monitor target” (object classes or instances – see this webcast about targeting rules and monitors: www.microsoft.com/winme/0703/28666/Target_Monitoring_Edit… ) and the aggregate rollup monitor you want to roll the state up to.

Choose a schedule, that is: how often you would like your script to run. For demonstration purposes I usually choose a very short interval, such as two or three minutes. For production environments, though, choose a longer time range.

Choose a name for your script, complete with a .VBS extension, and write the code of the script in the rich text box.

As the sample code and comments suggest, you should use a script that checks for the stuff you want it to check and returns a “Property Bag” that can later be interpreted by the OpsMgr workflow to change the monitor’s state.
This is substantially different from scripting in MOM 2005, where you could only launch scripts as responses, losing all control over their execution.

For demonstration purposes, use the following script code:

On Error Resume Next

Dim oAPI, oBag
Set oAPI = CreateObject("MOM.ScriptAPI")
Set oBag = oAPI.CreateTypedPropertyBag(StateDataType)

Const FOR_APPENDING = 8
strFileName = "c:\testfolder\testfile.txt"
strContent = "test "

Set objFS = CreateObject("Scripting.FileSystemObject")
Set objTS = objFS.OpenTextFile(strFileName, FOR_APPENDING)

If Err.Number <> 0 Then
    Call oBag.AddValue("State","BAD")
Else
    Call oBag.AddValue("State","GOOD")
    objTS.Write strContent
End If

Call oAPI.Return(oBag)

[edited on 29th of May, as pointed out by Ian: if you cut and paste the example script you might need to change the quote characters (“), as they cause the script to fail when run – it is an issue with the template of this blog.] [edited on 30th of May: I fixed the blog so that post content now shows plain, normal double quotes instead of fancy ones. It seems like a useful thing, since from time to time I post code…]

The script will try to write into the file c:\testfolder\testfile.txt.
If it finds the file and manages to write (append text) to it, it will return the property “State” with a value of “GOOD”.
If it fails (for example if the file does not exist), it will return the property “State” with a value of “BAD”.

In MOM 2005, scripts could only communicate their results back to the monitoring engine by generating Events or Alerts directly. In OpsMgr 2007 you can have your script return a property bag, and then the monitoring workflow continues and decides what to do depending on the script’s result.


So the next step is to go and check the value of the property we return in the property bag, to determine which state the monitor will have to assume.

We use the syntax Property[@Name='State'] in the parameter field, and we search for the content that means an unhealthy condition (“BAD”), or for the healthy one (“GOOD”).

Then we decide which state the monitor will assume in the healthy and unhealthy conditions (Green/Yellow or Green/Red, usually).

Optionally, we can decide to raise an Alert when the state changes to unhealthy, and close it again when it goes back to healthy.

Now our unit monitor is done.
All we have to do is wait for it to be pushed down to the agent(s) that should execute it, and wait for its status to change.
In fact, it should go to the unhealthy state first.
To test that it works, just create the text file it is looking for, wait for the script to run again, and the state should be reset to Healthy.

Have fun with more complex scripts!
