Archive for the 'OpsMgr2007' Category


Three quarters of 2015, my IT career and various ramblings

Monday, October 5th, 2015

September is over. The first three quarters of 2015 are over.
This has been a very important year so far – difficult, but revealing. Everything has been about change, healing and renewal.

We moved back to Europe first, and you might have now also read my other post about leaving Microsoft, more recently.

This was a hard choice – it took many months to reach the conclusion this is what I needed to do.

Most people have gone thru strong programming: they think you have to be 'successful' at something. Success is externally defined (as opposed to satisfaction, which we define for ourselves), and therefore you are supposed to study a certain field in college, then use that at work to build your career in the same field… and keep doing the same thing.

I was never like that – I didn't go to college, I didn't study as an 'engineer'. I just saw there was a market opportunity to find a job when I started, studied on the job, eventually excelled at it. But it never was *the* road. It just was one road; it has served me well so far, but it was just one thing I tried, and it worked out.
How did it start? As a pre-teen I had been interested in computers, then left that for a while and did a 'normal' high school (in Italy at the time, this was really non-technological). Then I tried to study sociology for a little bit – I really enjoyed the Cultural Anthropology lessons there, and we were smoking good weed with some folks outside of the university – but I really could not be bothered to spend the following 5 or 10 years of my life just studying and 'hanging around': I wanted money and independence to move out of my parents' house.

So, without much fanfare, I revived my IT knowledge: upgraded my skill from the 'hobbyist' world of the Commodore 64 and Amiga scene (I had been passionate about modems and the BBS world then), looked at the PC world of the time, rode the 'Internet wave' and applied for a simple job at an IT company.

A lot of my friends were either not even searching for a job, with the excuse that there weren't any, or spending time in university, in a time of change when all the university-level jobs were taken anyway – so that would have meant waiting even longer after they had finished studying… I am not even sure they realized this until much later.
But I just applied, played my cards, and got my job.

When I went to sign, they also reminded me they expected hard work at the simplest and humblest level: I would have to fix PCs and printers and help users with networking issues and tasks like those – at a customer of theirs, a big company.
I was ready to roll up my sleeves and help that IT department in whatever way I was capable of, and I did.
It all grew from there.

And that's how my IT career started. I learned all I know of IT on the job, by working my ass off, studying extra hours, watching older and more expert colleagues, and gaining experience.

I am not an engineer.
I am, at most, a mechanic.
I did learn a lot about companies and the market, about languages, designs, politics, and the human and technical factors in software engineering and the IT marketplace, over the course of the past 18 years.

But when I started, I was just trying to lend an honest hand, to get paid some money in return – isn't that what work was about?

Over time, IT got out of control. Like Venom in the Marvel comics, which made its appearance as a costume that Spider-Man started wearing… and slowly took over, as the 'costume' was in reality some sort of alien symbiotic organism (a parasite).

You might be wondering what I mean. From the outside I was a successful Senior Program Manager of a 'hot' Microsoft product.
Someone must have mistaken my diligence and hard work for 'talent' or 'career ambition' – but it never was that.
I got pushed up, taught to never turn down 'opportunities'.

But I don't feel this is my path anymore.
That type of work took too much mental energy out of me, and made me neglect myself and my family. Success at the expense of my own health and my family's isn't worth it. Other people have written that too – in my case, hopefully, I stopped earlier.

So what am I doing now?

First and foremost, I am taking time for myself and my family.
I am reading (and writing)
I am cooking again
I have been catching up on sleep – and have dreams again
I am helping my father-in-law build a shed in his yard
We bought a 14-year-old Volkswagen van that we are turning into a camper
I have not stopped building guitars – in fact I am getting set up to do it 'seriously' – so I am also standing up a separate site to promote that activity
I am making music and discovering new music and instruments
I am meeting new people and new situations

There are a lot of folks out there who either think I am crazy (they might be right, but I am happy this way), or think this is some sort of lateral move – I am not searching for another IT job, thanks. Stop the noise on LinkedIn please: I don't fit in your algorithms, I just made you believe I did, all these years.

Repost: Useful SetSPN tips

Wednesday, October 19th, 2011

I just saw that my former colleague (PFE) Tristan has posted an interesting note about the use of SetSPN "-A" vs SetSPN "-S". I normally don't repost other people's content, but I thought this would be useful, as there are a few SPNs used in OpsMgr and it is not always easy to get them all right… and you can find a few tricks I was not aware of by reading his post.

Check out the original post at http://blogs.technet.com/b/tristank/archive/2011/10/10/psa-you-really-need-to-update-your-kerberos-setup-documentation.aspx
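In short, "-S" behaves like "-A" but checks for duplicate SPNs before registering a new one. Purely as an illustration (this is not taken from Tristan's post: the server and account names below are invented, and you need a setspn.exe recent enough to know the -S switch), registering the SPNs for the OpsMgr SDK service account would look something like this:

setspn -S MSOMSdkSvc/OMRMS01 CONTOSO\omsdk
setspn -S MSOMSdkSvc/OMRMS01.contoso.com CONTOSO\omsdk
setspn -L CONTOSO\omsdk

The last command simply lists what is already registered for that account – a quick way to double-check the result.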

I have been chosen; Farewell my friends…

Thursday, July 7th, 2011

I have been in Premier Field Engineering for nearly 7 years (it was not even called PFE when I joined – it was just "another type of support"…) and I have to admit that it has been a fun, fun ride: I worked with awesome people and managed to make a difference with our products and services for many customers – directly working with some of those customers, as well as indirectly thru the OpsMgr Health Check program – the service I led for the last 3+ years, which nowadays gets delivered hundreds of times a year around the globe by my other fellow PFEs.

But it is time to move on: I have decided to go thru a big life change for me and my family, and I won't be working as a Premier Field Engineer anymore as of next week.

But don't panic – I am staying at Microsoft!

I have actually never been closer to Microsoft than now: we are packing and moving to Seattle the coming weekend, and on July 18th I will start working as a Program Manager in the Operations Manager product team, in Redmond. I am hoping this will enable me to make a difference with even more customers.

Exciting times ahead – wish me luck!

 

That said – PFE is hiring! If you are interested in working for Microsoft – we have open positions (including my vacant position in Italy) for almost all the Microsoft technologies. Simply visit http://careers.microsoft.com and search on “PFE”.

As for the OpsMgr Health Check, don't you worry: it will continue being improved – I left it in the hands of some capable colleagues: Bruno Gabrielli, Stefan Stranger and Tim McFadden – and they have a plan and commitment to update it to OpsMgr 2012.

Improved ACS Partitions Query

Wednesday, May 4th, 2011

This has been sitting on my hard drive for a long time. Long story short, the report I posted at Audit Collection Services Database Partitions Size Report had a couple of bugs:

  1. it did not consider the size of the dtString_XXX tables but only the size of dtEvent_XXX tables – this would still give you an idea of the trends, but it could lead to quite different SIZE calculations
  2. the query was failing on some instances that have been installed with the wrong (unsupported) Collation settings.

I fixed both bugs, but I don’t have a machine with SQL 2005 and Visual Studio 2005 anymore… so I can’t rebuild my report – but I don’t want to distribute one that only works on SQL 2008 because I know that SQL2005 is still out there. This is partially the reason that held this post back.

Without waiting any longer, therefore, I decided I'll just give you the fixed query. Enjoy :-)

--Query to get the Partition Table
--for each partition we launch the sp_spaceused stored procedure to determine the size and other info

--partition list
select PartitionId,Status,PartitionStartTime,PartitionCloseTime 
into #t1
from dbo.dtPartition with (nolock)
order by PartitionStartTime Desc 

--sp_spaceused holder table for dtEvent
create table #t2 (
    PartitionId nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    rows nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    reserved nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    data nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    index_size nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    unused nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS    
)

--sp_spaceused holder table for dtString
create table #t3 (
    PartitionId nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    rows nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    reserved nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    data nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    index_size nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS,
    unused nvarchar(MAX) Collate SQL_Latin1_General_CP1_CI_AS    
)

set nocount on

--vars used for building Partition GUID and main table name
declare @partGUID nvarchar(MAX)
declare @tblName nvarchar(MAX)
declare @tblNameComplete nvarchar(MAX)
declare @schema nvarchar(MAX)
DECLARE @vQuery NVARCHAR(MAX)

--cursor
declare c cursor for 
    select PartitionID from #t1
open c
fetch next from c into @partGUID

--start cursor usage
while @@FETCH_STATUS = 0
begin

--tblName - first usage for dtEvent
set @tblName = 'dtEvent_' + @partGUID

--retrieve the schema name
SET @vQuery = 'SELECT @dbschema = TABLE_SCHEMA from INFORMATION_SCHEMA.tables where TABLE_NAME = ''' + @tblName + ''''
EXEC sp_executesql @vQuery,N'@dbschema nvarchar(max) out, @dbtblName nvarchar(max)',@schema out, @tblname

--tblNameComplete
set @tblNameComplete = @schema + '.' + @tblName

INSERT #t2 
    EXEC sp_spaceused @tblNameComplete

--tblName - second usage for dtString
set @tblName = 'dtString_' + @partGUID

--retrieve the schema name
SET @vQuery = 'SELECT @dbschema = TABLE_SCHEMA from INFORMATION_SCHEMA.tables where TABLE_NAME = ''' + @tblName + ''''
EXEC sp_executesql @vQuery,N'@dbschema nvarchar(max) out, @dbtblName nvarchar(max)',@schema out, @tblname

--tblNameComplete
set @tblNameComplete = @schema + '.' + @tblName

INSERT #t3 
    EXEC sp_spaceused @tblNameComplete

fetch next from c into @partGUID
end
close c
deallocate c

--select * from #t2
--select * from #t3

--results
select #t1.PartitionId, 
    #t1.Status, 
    #t1.PartitionStartTime, 
    #t1.PartitionCloseTime, 
    #t2.rows,
    (CAST(LEFT(#t2.reserved,LEN(#t2.reserved)-3) AS NUMERIC(18,0)) + CAST(LEFT(#t3.reserved,LEN(#t3.reserved)-3) AS NUMERIC(18,0))) as 'reservedKB', 
    (CAST(LEFT(#t2.data,LEN(#t2.data)-3) AS NUMERIC(18,0)) + CAST(LEFT(#t3.data,LEN(#t3.data)-3) AS NUMERIC(18,0)))as 'dataKB', 
    (CAST(LEFT(#t2.index_size,LEN(#t2.index_size)-3) AS NUMERIC(18,0)) + CAST(LEFT(#t3.index_size,LEN(#t3.index_size)-3) AS NUMERIC(18,0))) as 'indexKB', 
    (CAST(LEFT(#t2.unused,LEN(#t2.unused)-3) AS NUMERIC(18,0)) + CAST(LEFT(#t3.unused,LEN(#t3.unused)-3) AS NUMERIC(18,0))) as 'unusedKB'
from #t1
join #t2
on #t2.PartitionId = ('dtEvent_' + #t1.PartitionId)
join #t3
on #t3.PartitionId = ('dtString_' + #t1.PartitionId)
order by PartitionStartTime desc

--cleanup
drop table #t1
drop table #t2
drop table #t3

OpsMgr Agents and Gateways Failover Queries

Thursday, December 23rd, 2010

The following article by Jimmy Harper explains very well how to set up agents and gateways’ failover paths thru Powershell http://blogs.technet.com/b/jimmyharper/archive/2010/07/23/powershell-commands-to-configure-gateway-server-agent-failover.aspx . This is the approach I also recommend, and that article is great – I encourage you to check it out if you haven’t done it yet!

Anyhow, when checking for the actual failover paths that have been configured, the use of Powershell suggested by Jimmy is rather slow – especially if your agent count is high. In the Operations Manager Health Check tool I was also using that technique at the beginning, but eventually moved to the use of SQL queries just for performance reasons. Since then, we have been using these SQL queries quite successfully for about 3 years now.

But this is the season of giving… and I guess SQL queries can be a gift, right? Therefore I am now donating them as a Christmas gift to the OpsMgr community :-)

Enjoy – and Merry Christmas!

 

--GetAgentForWhichServerIsPrimary
SELECT SourceBME.DisplayName as Agent,TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK) 
JOIN BaseManagedEntity SourceBME 
ON R.SourceEntityID = SourceBME.BaseManagedEntityID 
JOIN BaseManagedEntity TargetBME 
ON R.TargetEntityID = TargetBME.BaseManagedEntityID 
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceCommunication() 
AND SourceBME.DisplayName not in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.GatewayManagementServer') 
and IsDeleted ='0') 
AND SourceBME.DisplayName not in (select DisplayName from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.ManagementServer') 
and IsDeleted ='0') 
AND R.IsDeleted = '0'


--GetAgentForWhichServerIsFailover
SELECT SourceBME.DisplayName as Agent,TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK) 
JOIN BaseManagedEntity SourceBME 
ON R.SourceEntityID = SourceBME.BaseManagedEntityID 
JOIN BaseManagedEntity TargetBME 
ON R.TargetEntityID = TargetBME.BaseManagedEntityID 
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceSecondaryCommunication() 
AND SourceBME.DisplayName not in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.GatewayManagementServer') 
and IsDeleted ='0') 
AND SourceBME.DisplayName not in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.ManagementServer') 
and IsDeleted ='0') 
AND R.IsDeleted = '0'


--GetGatewayForWhichServerIsPrimary
SELECT SourceBME.DisplayName as Gateway, TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK) 
JOIN BaseManagedEntity SourceBME 
ON R.SourceEntityID = SourceBME.BaseManagedEntityID 
JOIN BaseManagedEntity TargetBME 
ON R.TargetEntityID = TargetBME.BaseManagedEntityID 
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceCommunication() 
AND SourceBME.DisplayName in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.GatewayManagementServer') 
and IsDeleted ='0') 
AND R.IsDeleted = '0'
    

--GetGatewayForWhichServerIsFailover
SELECT SourceBME.DisplayName As Gateway, TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK) 
JOIN BaseManagedEntity SourceBME 
ON R.SourceEntityID = SourceBME.BaseManagedEntityID 
JOIN BaseManagedEntity TargetBME 
ON R.TargetEntityID = TargetBME.BaseManagedEntityID 
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceSecondaryCommunication() 
AND SourceBME.DisplayName in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.GatewayManagementServer') 
and IsDeleted ='0') 
AND R.IsDeleted = '0'


--xplat agents
select bme2.DisplayName as XPlatAgent, bme.DisplayName as Server
from dbo.Relationship r with (nolock) 
join dbo.RelationshipType rt with (nolock) 
on r.RelationshipTypeId = rt.RelationshipTypeId 
join dbo.BasemanagedEntity bme with (nolock) 
on bme.basemanagedentityid = r.SourceEntityId 
join dbo.BasemanagedEntity bme2 with (nolock) 
on r.TargetEntityId = bme2.BaseManagedEntityId 
where rt.RelationshipTypeName = 'Microsoft.SystemCenter.HealthServiceManagesEntity' 
and bme.IsDeleted = 0 
and r.IsDeleted = 0 
and bme2.basemanagedtypeid in (SELECT DerivedTypeId 
FROM DerivedManagedTypes with (nolock) 
WHERE BaseTypeId = (select managedtypeid 
from managedtype where typename = 'Microsoft.Unix.Computer') 
and DerivedIsAbstract = 0)

Got Orphaned OpsMgr Objects?

Friday, December 17th, 2010

Have you ever wondered what would happen if, in Operations Manager, you deleted a Management Server or Gateway that manages objects (such as network devices) or has agents pointing uniquely to it as their primary server?

The answer is simple, but not very pleasant: you get ORPHANED objects, which will linger in the database but which you won't be able to “see” or re-assign from the GUI anymore.

So the first thing I want to share is a query to determine IF you have any of those orphaned agents. Even if you already know you do, since you are not able to "see" them from the console, you might have to dig their names out of the database. Here's a query I got from a colleague in our reactive support team:


-- Check for orphaned health services (e.g. agent).
declare @DiscoverySourceId uniqueidentifier;
SET @DiscoverySourceId = dbo.fn_DiscoverySourceId_User();
SELECT TME.[TypedManagedEntityid], HS.PrincipalName
FROM MTV_HealthService HS
INNER JOIN dbo.[BaseManagedEntity] BHS WITH(nolock)
ON BHS.[BaseManagedEntityId] = HS.[BaseManagedEntityId]
-- get host managed computer instances
INNER JOIN dbo.[TypedManagedEntity] TME WITH(nolock)
ON TME.[BaseManagedEntityId] = BHS.[TopLevelHostEntityId]
AND TME.[IsDeleted] = 0
INNER JOIN dbo.[DerivedManagedTypes] DMT WITH(nolock)
ON DMT.[DerivedTypeId] = TME.[ManagedTypeId]
INNER JOIN dbo.[ManagedType] BT WITH(nolock)
ON DMT.[BaseTypeId] = BT.[ManagedTypeId]
AND BT.[TypeName] = N'Microsoft.Windows.Computer'
-- only with missing primary
LEFT OUTER JOIN dbo.Relationship HSC WITH(nolock)
ON HSC.[SourceEntityId] = HS.[BaseManagedEntityId]
AND HSC.[RelationshipTypeId] = dbo.fn_RelationshipTypeId_HealthServiceCommunication()
AND HSC.[IsDeleted] = 0
INNER JOIN DiscoverySourceToTypedManagedEntity DSTME WITH(nolock)
ON DSTME.[TypedManagedEntityId] = TME.[TypedManagedEntityId]
AND DSTME.[DiscoverySourceId] = @DiscoverySourceId
WHERE HS.[IsAgent] = 1
AND HSC.[RelationshipId] IS NULL;

Once you have identified the agent you need to re-assign to a new management server, this is doable from the SDK. Below is a powershell script I wrote which will re-assign it to the RMS. It has to run from within the OpsMgr Command Shell.
You still need to change the logic which chooses which agent – this is meant as a starting base… you could easily expand it into accepting parameters and/or consuming an input text file, or using a different Management Server than the RMS… you get the point.

$mg = (get-managementgroupconnection).managementgroup
$mrc = Get-RelationshipClass | where {$_.name -like "*Microsoft.SystemCenter.HealthServiceCommunication*"}
$cmro = new-object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
$rms = (get-rootmanagementserver).HostedHealthService
$deviceclass = $mg.getmonitoringclass("HealthService")
$mc = Get-connector | where {$_.Name -like "*MOM Internal Connector*"}
Foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    #the next line should be changed to pick the right agent to re-assign
    if ($obj.DisplayName -match 'dsxlab')
    {
                Write-host $obj.displayname
                $imdd = new-object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
                $cmro.SetSource($obj)
                $cmro.SetTarget($rms)
                $imdd.Add($cmro)
                $imdd.Commit($mc)
    }
}

Similarly, you might get orphaned network devices. The script below is used to re-assign all Network Devices to the RMS. This script is actually something I have had even before the other one (yes, it has been sitting in my "digital drawer" for a couple of years or more…) and uses the same concept – only you might notice that the relation's source and target are "reversed", since the relationships are different:

  • the Management Server (source) "manages" the Network Device (target)
  • the Agent (source) "talks" to the Management Server (target)

With a bit of added logic it should be easy to have it work for specific devices.

$mg = (get-managementgroupconnection).managementgroup
$mrc = Get-RelationshipClass | where {$_.name -like "*Microsoft.SystemCenter.HealthServiceShouldManageEntity*"}
$cmro = new-object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
$rms = (get-rootmanagementserver).HostedHealthService
$deviceclass = $mg.getmonitoringclass("NetworkDevice")
Foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
                Write-host $obj.displayname
                $imdd = new-object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
                $cmro.SetSource($rms)
                $cmro.SetTarget($obj)
                $imdd.Add($cmro)
                $mc = Get-connector | where {$_.Name -like "*MOM Internal Connector*"}
                $imdd.Commit($mc)
}
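To add the "logic for specific devices" mentioned just before this script, the same kind of DisplayName check used in the agent script earlier works here as well. A minimal sketch (the -match pattern is just a placeholder you would adapt to your own device names):

Foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
                #only re-assign the network devices whose name matches our placeholder pattern
                if ($obj.DisplayName -match 'router')
                {
                                Write-host $obj.displayname
                                $imdd = new-object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
                                $cmro.SetSource($rms)
                                $cmro.SetTarget($obj)
                                $imdd.Add($cmro)
                                $mc = Get-connector | where {$_.Name -like "*MOM Internal Connector*"}
                                $imdd.Commit($mc)
                }
}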

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

Does anyone have a new System Center sticker for me?

Saturday, November 27th, 2010

Does anyone have a new System Center sticker?

I got this sticker last APRIL at MMS2010 in JUST ONE COPY, and I waited till I got a NEW laptop in SEPTEMBER to actually use it…
It also took a while to stick it on properly (not to mention re-installing the PC the way I wanted…), but this week they told me that, by mistake, I had been given the wrong machine (they did it all themselves, tho – I did not ask for any specific one) and this one needs to be replaced!!!!

This is WORSE than any hardware FAILure, as the machine just works very well and I was expecting to keep it for the next two years :-(

Can anyone be so nice to send me one of those awesome stickers again? :-)

Microsoft.Linux.RHEL.5.LogicalDisk.DiskBytesPerSecond Type Mismatch

Friday, October 22nd, 2010

I have had the following in my notes for a while… and I have not blogged in a while (been too busy), so I decided to blog it today, before the topic gets too old and starts stinking :-)

 

It all started when a customer showed me an Alert he was seeing in his environment from some XPlat workflow. The alert looks like the following:

Generic Performance Mapper Module Failed Execution
Alert Description Source: RLWSCOM02.domain.dom
Module was unable to convert parameter to a double value
Original parameter: '$Data///*[local-name()="BytesPerSecond"]$'
Parameter after $Data replacement: "
Error: 0x80020005
Details: Type mismatch.
One or more workflows were affected by this.
Workflow name: Microsoft.Linux.RHEL.5.LogicalDisk.DiskBytesPerSecond.Collection
Instance name: /
Instance ID: {4F6FA8F5-C56F-4C9B-ED36-12DAFF4073D1}
Management group: DataCenter
Path: RLWSCOM02.domain.dom\RLWSCOM02.domain.dom
Alert Rule: Generic Performance Mapper Module Runtime Failure
Created: 6/28/2010 11:30:28 PM

 

First I stumbled into this forum post which mentions the same symptom http://social.technet.microsoft.com/Forums/en-US/crossplatformgeneral/thread/62e0bf3e-be6f-4218-a37b-f1e66f02aa49 – but when looking at the resolution, the locale on the customer machine was good (== set to US settings), so I concluded that it was not the same root cause.

 

Then I looked at what that rule was supposed to do, and queried the same CIM class both remotely thru WS-Man and locally via CIM, and concluded that my issue was that certain values were returning as NULL while we were expecting to see a number on the Management Server – therefore the Type Mismatch!

I have explained previously how to run CIM queries against the XPlat agent; in this case it was the following one:

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_FileSystemStatisticalInformation?__cimnamespace=root/scx -username:scomuser -password:password -r:https://rllspago01.domain.dom:1270/wsman -auth:basic -skipCACheck -skipCNCheck

 

SCX_FileSystemStatisticalInformation

AverageDiskQueueLength = null

AverageTransferTime = null

BytesPerSecond = null

Caption = File system information

Description = Performance statistics related to a logical unit of secondary storage

ElementName = null

FreeMegabytes = 4007

IsAggregate = false

IsOnline = true

Name = /

PercentBusyTime = null

PercentFreeSpace = 55

PercentIdleTime = null

PercentUsedSpace = 45

ReadBytesPerSecond = null

ReadsPerSecond = null

TransfersPerSecond = null

UsedMegabytes = 3278

WriteBytesPerSecond = null

WritesPerSecond = null

 

See the NULLs ? Those are our issue.

Now, before you continue reading, I will tell you that I have also investigated this internally, and apparently we have just (in Cumulative Update 3) changed this behaviour in our XPlat modules, so that when NULL is returned, we consider it to be ZERO. Good or bad as that may be, it will at least take care of the error. But if you don’t get any data from the Unix system… well, you are not getting any data – so that might cause a surprise later on when you go and look at those charts and expect to see your disk “performance counters” but in fact all you have is a bunch of ZEROs (how very interesting!). So, basically, the fix in CU3 suppresses the symptom, but does not address the cause.

So, let’s see what is actually causing this, as you might well want to get those statistics, or probably you would not be monitoring that server!

I looked at the Cimd.log (set to verbose), which only says the following (basically not much: it is getting info for 3 partitions… and the provider code is working):

2010-09-01T08:38:32,796Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances()

2010-09-01T08:38:33,359Z Trace      [scx.core.providers.diskprovider:5964:3086830480] Object Path = //rllspago01.domain.dom/root/scx:SCX_FileSystemStatisticalInformation

2010-09-01T08:38:33,359Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances() – Calling DoEnumInstances()

2010-09-01T08:38:33,359Z Trace      [scx.core.providers.diskprovider:5964:3086830480] DiskProvider DoEnumInstances

2010-09-01T08:38:33,359Z Trace      [scx.core.providers.diskprovider:5964:3086830480] DiskProvider GetDiskEnumeration – type 3

2010-09-01T08:38:33,360Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances() – DoEnumInstances() returned – 3

2010-09-01T08:38:33,360Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances() – Call ReturnDone

2010-09-01T08:38:33,360Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances() – return OK

2010-09-01T08:38:33,360Z Trace      [scx.core.provsup.cmpibase.singleprovider.DiskProvider:5964:3086830480] SingleProvider::EnumInstances() – Returning – 0

 

but it still did not give me an idea as to why we would not get data for those “counters”. At this point I stopped using complex troubleshooting techniques, simply turned intuition on, and tried with some help from a search engine: http://www.bing.com/search?q=How+do+I+find+out+Linux+Disk+utilization 

the results I got all mentioned that on Linux you would use the “iostat” command.

So I tried to use it and… lo and behold: the iostat command was NOT INSTALLED on that machine!

Guess what? We installed it (it is included in the “sysstat” package for RedHat linux, so a simple “yum install sysstat” took care of this) and the counters started working!
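To recap the check and the fix in one place (this is just a sketch, assuming a yum-based RedHat/CentOS box like the one in this case):

root# which iostat
root# yum install sysstat
root# iostat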

Hope that is useful to some.

OpsMgr Event IDs Spreadsheet

Tuesday, June 22nd, 2010

I work in support (mostly with System Center Operations Manager, as you know), and I work with event logs every day. The following are typical situations:

  1. I get a colleague or a customer telling me “I am having a problem and the SCOM agent is showing 21037 events and 20002 events.  What’s wrong with it?”   
  2. I want to tune an OpsMgr environment and reduce load on the database by turning off a few event collections, as my friend Kevin Holman suggests here http://blogs.technet.com/kevinholman/archive/2009/11/25/tuning-tip-turning-off-some-over-collection-of-events.aspx .
  3. I am analyzing, sorting and grouping Events with Powershell like I have written on my blog lately http://www.muscetta.com/2009/12/16/opsmgr-eventlog-analysis-with-powershell/ but I can’t read those long descriptions properly.
  4. I exported an EVT from a customer environment and load it on a machine that does not have the OpsMgr message DLLs installed – all I see are EventIDs and type (Warning, Error) – but no real description – and I still want to figure out what those events are trying to tell me.

Getting to the point: I, like everyone else, don’t have every OpsMgr event memorized.

This is why I thought of building this spreadsheet, and I hope it might come in handy to more people.

The spreadsheet contains an “AllEvents” list – and then the same events are broken down by event source as well:

clip_image002

When you want to search for an event (in one of the situations described above) just open up the spreadsheet, go to the “AllEvents” tab, hit CTRL+F (“Find”) and type in the Event ID you are searching for:

clip_image004

And this will take you to the row containing the event, so you can look up its description:

clip_image006

The description shows the event standard text (which is in the message DLL, therefore is the part you will not see if opening an EVT on another machine that does not have OpsMgr installed), and where the event parameters are (%1, %2, etc – which will be the strings you see in the EVT anyway).

That way you can get an understanding of what the original message would have looked like on the original machine.

This is just one possible usage pattern of this reference. It can also be useful to just read/study the events, learning about new ones you have never encountered, or refreshing your memory on those you HAVE seen in the past. And of course you can also find other creative ways to use it.

You can get it from here.

 

A few last words to give due credit: this spreadsheet has been compiled by using Eventlog Explorer (http://blogs.technet.com/momteam/archive/2008/04/02/eventlog-explorer.aspx ) to extract the event information out of the message DLLs on an OpsMgr 2007 R2 installation. That info was then copied and pasted into Excel in order to have an “offline” reference. Also, I would like to thank Kevin Holman for pointing me to Eventlog Explorer in the first place, and then for insisting I should not keep this spreadsheet in my drawer, as it could be useful to more people!

How to convert (and fixup) the RedHat RPM to run on Debian/Ubuntu

Monday, June 21st, 2010

In an earlier post I had shown how I got the Xplat agent running on Ubuntu. I perfected the technique over time, and what follows is a step-by-step process on how to convert and change the RedHat package to run on Debian/Ubuntu. Of course this is still a hack… but some people asked me to detail it a bit more. At the same time, the cross platform team is working to update the source code on CodePlex with extra bits that will make it more straightforward to grab, modify and re-compile than it is today. Until then, here is how I got it to work.

I assume you have already copied the right .RPM package off the OpsMgr server’s /AgentManagement directory to the Linux box here. The examples below refer to the 32bit package, but of course the same identical technique would work for the 64bit version.

We start by converting the RPM package to DEB format:

root# alien -k scx-1.0.4-258.rhel.5.x86.rpm --scripts

scx_1.0.4-258_i386.deb generated

 

Then we need to create a folder where we will extract the content of the package, modify stuff, and repackage it:

root# mkdir scx_1.0.4-258_i386

root# cd scx_1.0.4-258_i386

root# ar -x ../scx_1.0.4-258_i386.deb

root# mkdir debian

root# cd debian

root# mkdir DEBIAN

root# cd DEBIAN

root# cd ../..

root# rm debian-binary

root# mv control.tar.gz debian/DEBIAN/

root# mv data.tar.gz debian/

root# cd debian

root# tar -xvzf data.tar.gz

root# rm data.tar.gz

root# cd DEBIAN/

root# tar -xvzf control.tar.gz

root# rm control.tar.gz

Now we have the “skeleton” of the package easily laid out on the filesystem and we are ready to modify the package and add/change stuff to and in it.

 

First, we need to add some stuff to it which is expected to be found on a RedHat distro, but is not present in Debian. In particular:

1. You should copy the file “functions” (that you can get from a redhat/centos box under /etc/init.d) under the debian/etc/init.d folder in our package folder. This file is required/included by our startup scripts, so it needs to be deployed too.

Then we need to change some of the package behavior by editing files under debian/DEBIAN:

2. edit the “control” file (a file describing what the package is, and does):

clip_image002

3. edit the “preinst” file (pre-installation instructions): we need to add instructions to copy the “issue” file onto “redhat-release” (the SCX_OperatingSystem class will look into that file, and since its name is hard-coded in the binary, we need to let it find it):

clip_image004

these are the actual command lines to add for both packages (Debian or Ubuntu):

# symbolic links for libraries called differently on Ubuntu and Debian vs. RedHat

ln -s /usr/lib/libcrypto.so.0.9.8 /usr/lib/libcrypto.so.6

ln -s /usr/lib/libssl.so.0.9.8 /usr/lib/libssl.so.6

the following bit would be Ubuntu-specific:

#we need this file because the OS provider relies on it, so we convert what we have in /etc/issue

#this is ok for Ubuntu (“Ubuntu 9.0.4 \n \l” becomes “Ubuntu 9.0.4”)

cat /etc/issue | awk '/\\n/ {print $1, $2}' > /etc/redhat-release

while the following bit is Debian-specific:

#this is ok for Debian (“Debian GNU/Linux 5.0 \n \l” becomes “Debian GNU/Linux 5.0”)

cat /etc/issue | awk '/\\n/ {print $1, $2, $3}' > /etc/redhat-release

 

4. Then we edit/modify the “postinst” file (post-installation instructions) as follows:

a. remove the 2nd and 3rd lines which look like the following

RPM_INSTALL_PREFIX=

export RPM_INSTALL_PREFIX

as they are only useful for the RPM system, not DEB/APT, so we don’t need them.

b. change the following 2 functions which contain RedHat-specific commands:

configure_pegasus_service() {

           /usr/lib/lsb/install_initd /etc/init.d/scx-cimd

}

start_pegasus_service() {

           service scx-cimd start

}

c. We need to change them to the Debian equivalents for registering a service in INIT and starting it:

configure_pegasus_service() {

               update-rc.d scx-cimd defaults

}

start_pegasus_service() {

              /etc/init.d/scx-cimd start

}

5. Modify the “prerm” file (pre-removal instructions):

a. Just like “postinst”, remove the lines

RPM_INSTALL_PREFIX=

export RPM_INSTALL_PREFIX

b. Locate the two functions stopping and un-installing the service

stop_pegasus_service() {

         service scx-cimd stop

}

unregister_pegasus_service() {

          /usr/lib/lsb/remove_initd /etc/init.d/scx-cimd

}

c. Change those two functions with the Debian-equivalent command lines

stop_pegasus_service() {

           /etc/init.d/scx-cimd stop

}

unregister_pegasus_service() {

           update-rc.d -f scx-cimd remove

}

At this point the changes we needed have been put in place, and we can re-build the DEB package.

Move to the main folder of the application (the scx_1.0.4-258_i386 folder):

root# cd ../..

Create the package starting from the folder structure:

root# dpkg-deb --build debian

dpkg-deb: building package `scx' in `debian.deb'.

Rename the package (for Ubuntu)

root# mv debian.deb scx_1.0.4-258_Ubuntu_9_i386.deb

Rename the package (for Debian)

root# mv debian.deb scx_1.0.4-258_Debian_5_i386.deb

Install it

root# dpkg -i scx_1.0.4-258_Platform_Version_i386.deb

All done! It should install and work!

 

Next step would be creating a Management Pack to monitor Debian and Ubuntu. It is pretty similar to what Robert Hearn has described step by step for CentOS, but with some different replacements of strings, as you can imagine. I have done this but have not written down the procedure yet, so I will post another article on how to do this as soon as I manage to get it standardized and reliable. There is a bit more work involved for Ubuntu/Debian… as some of the daemons/services have different names, and certain files too… but nothing terribly difficult to change so you might want to try it already and have a go at it!

In the meantime, as a teaser, here’s my server’s (http://www.muscetta.com) performance, being monitored with this “hack”:

image

 

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

Audit Collection Services Database Partitions Size Report

Wednesday, May 5th, 2010

A number of people I have talked to liked my previous post on ACS sizing. One thing that was not extremely easy or clear to them in that post was *how* exactly I did one of the things I wrote:

[…] use the dtEvent_GUID table to get the number of events for that day, and use the stored procedure “sp_spaceused”  against that same table to get an overall idea of how much space that day is taking in the database […]

To be completely honest, I do not expect people to do this manually a hundred times if they have a hundred partitions. In fact, I have been doing this for a while with a script which does the looping for me and runs that sp_spaceused a number of times. I cannot share that script, but I do realize that this automation is very useful, therefore I wrote a “stand-alone” SQL query which, using a couple of temporary tables, produces a similar type of output. I also went a step further and packaged it into a SQL Server Reporting Services report for everyone’s consumption. The report should look like the following screenshot, featuring a chart and the table with the numerical information about each and every partition in the database:

ACS Partitions Report

You can download the report from here.

You need to upload it to your report server and change the data source to the shared Data Source that the built-in ACS Reports also use, and it should work.

[NOTE/UPDATE May 4th 2011: This report has a few bugs. I have posted the updated query on http://www.muscetta.com/2011/05/04/improved-acs-partitions-query/ . I am sorry I can't provide a ready made report with the fix right now. Make sure you understand this and don't implement it without testing.]

Enjoy!

A few thoughts on sizing Audit Collection System

Thursday, March 18th, 2010

People were already collecting logs with MOM, so why not the security log? Some people were doing that, but it did not scale enough; for this reason, a few years ago Eric Fitzgerald announced that he was working on Microsoft Audit Collection System. At the time, the tool had no interface… and the rest is history: it has been integrated into System Center Operations Manager. Still, ACS remains a lesser-known component of OpsMgr.

There are a number of resources on the web that are worth mentioning and linking to… and, of course, many more that I cannot link to here.

As for myself, I have been playing with ACS since those early beta days (before I joined Microsoft and before going back to MOM, when I was working in Security), but I never really blogged about this piece.

Since I have been doing quite a lot of work around ACS lately, again, I thought it might be worth consolidating some thoughts about it, hence this post.

Anatomy of an “Online” Sizing Calculation

What I would like to explain here is the strategy and process I go thru when analyzing the data stored in an ACS database, in order to determine a filtering strategy: what to keep and what not to keep, by applying a filter on the ACS Collector.

So, the first thing I usually start with is using one of the many “ACS sizer” Excel spreadsheets around… which usually tell you that you need more space than is really necessary… basically giving you a “worst case” scenario. I don’t know how some people can actually do this from a purely theoretical point of view, but I usually prefer a bottom-up approach: I look at the actual data that ACS is collecting without filters, and start from there for a better/more accurate sizing.

In the case of a new install this is easy – you just turn ACS on, set the retention to a few days (one or two weeks maximum), give the DB plenty of space to make sure it will make it, add all your forwarders… sit back and wait.

Then you come back 2 weeks later and start looking at the data that has been collected.

What/How much data are we collecting?

First of all, if we have not changed the default settings, the grooming and partitioning algorithm will create new partitioned tables every day. So my first step is to see how big each “partition” is.

But… what is a partition, anyway? A partition is a set of 4 tables joined together:

  1. dtEvent_GUID
  2. dtEventData_GUID
  3. dtPrincipal_GUID
  4. dtString_GUID

where GUID is a new GUID every day, and of course the 4 tables that make up a daily partition will have the same GUID.

The dtPartition table contains a list of all partitions and their GUIDs, together with their start and closing time.

Just to get a rough estimate we can ignore the space used by the last three tables – which are usually very small – and only use the dtEvent_GUID table to get the number of events for that day, and use the stored procedure “sp_spaceused”  against that same table to get an overall idea of how much space that day is taking in the database.
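In practice, for a single day, this boils down to two very simple steps (the stand-alone query I posted later just automates the looping). A minimal sketch, assuming the partition tables live under the default dbo schema, and using the first partition GUID from the table below as an example:

--list the available partitions, newest first
select PartitionId, Status, PartitionStartTime, PartitionCloseTime
from dbo.dtPartition with (nolock)
order by PartitionStartTime desc

--then check how much space that day's dtEvent table is using
exec sp_spaceused 'dbo.dtEvent_9b45a567_c848_4a32_9c35_39b402ea0ee2'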

By following this process, I come up with something like the following:

Partition ID | Status | Partition Start Time | Partition Close Time | Rows | Reserved KB | Total MB
9b45a567_c848_4a32_9c35_39b402ea0ee2 0 2/1/2010 2:00 2/1/2010 2:00 29,749,366 7,663,488 7,484
8d8c8ee1_4c5c_4dea_b6df_82233c52e346 2 1/31/2010 2:00 2/1/2010 2:00 28,067,438 9,076,904 8,864
34ce995b_689b_46ae_b9d3_c644cfb66e01 2 1/30/2010 2:00 1/31/2010 2:00 30,485,110 9,857,896 9,627
bb7ea5d3_f751_473a_a835_1d1d42683039 2 1/29/2010 2:00 1/30/2010 2:00 48,464,952 15,670,792 15,304
ee262692_beae_4d81_8079_470a54567946 2 1/28/2010 2:00 1/29/2010 2:00 48,980,178 15,836,416 15,465
7984b5b8_ddea_4e9c_9e51_0ee7a413b4c9 2 1/27/2010 2:00 1/28/2010 2:00 51,295,777 16,585,408 16,197
d93b9f0e_2ec3_4f61_b5e0_b600bbe173d2 2 1/26/2010 2:00 1/27/2010 2:00 53,385,239 17,262,232 16,858
8ce1b69a_7839_4a05_8785_29fd6bfeda5f 2 1/25/2010 2:00 1/26/2010 2:00 55,997,546 18,105,840 17,681
19aeb336_252d_4099_9a55_81895bfe5860 2 1/24/2010 2:00 1/24/2010 2:00 28,525,304 7,345,120 7,173
1cf70e01_3465_44dc_9d5c_4f3700dc408a 2 1/23/2010 2:00 1/23/2010 2:00 26,046,092 6,673,472 6,517
f5ec207f_158c_47a8_b15f_8aab177a6305 2 1/22/2010 2:00 1/22/2010 2:00 47,818,322 12,302,208 12,014
b48dabe6_a483_4c60_bb4d_93b7d3549b3e 2 1/21/2010 2:00 1/21/2010 2:00 55,060,150 14,155,392 13,824
efe66c10_0cf2_4327_adbf_bebb97551c93 2 1/20/2010 2:00 1/20/2010 2:00 58,322,217 15,029,216 14,677
0231463e_8d50_4a42_a834_baf55e6b4dcd 2 1/19/2010 2:00 1/19/2010 2:00 61,257,393 15,741,248 15,372
510acc08_dc59_482e_a353_bfae1f85e648 2 1/18/2010 2:00 1/18/2010 2:00 64,579,122 16,612,512 16,223

If you have just installed ACS and let it run without filters with your agents for a couple of weeks, you should get some numbers like those above for your “couple of weeks” of analysis. If you graph your numbers in Excel (both size and number of rows/events per day) you should get some similar lines that show a pattern or trend:

Trend: Space user by day

Trend: Number of events by day

So, in my example above, we can clearly observe a “weekly” pattern (monday-to-friday being busier than the weekend) and we can see that – for that environment – the biggest partition is roughly 17GB. If we round this up to 20GB – and also considering the weekends are much quieter – we can forecast 20*7 = 140GB per week. This has an excess “buffer” which will let the system survive event storms, should they happen. We also always recommend having some free space to allow for re-indexing operations.

In fact, especially when collecting everything without filters, the daily size is a lot less predictable: imagine worms “trying out” administrator account’s passwords, and so on… those things can easily create event storms.

Anyway, in the example above, the customer would have liked to keep 6 MONTHS (180 days) of data online, which would become 20*180 = 3600GB = THREE AND A HALF TERABYTES! Therefore we need a filtering strategy – and badly – to reduce this size.

[edited on May 7th 2010 – if you want to automate the above analysis and produce a table and graphs like those just shown, you should look at my following post.]

Filtering Strategies

Ok, then we need to look at WHAT actually comprises that amount of events we are collecting without filters. As I wrote above, I usually run queries to get this type of information.

I will not get into HOW TO write a filter here – a collector’s filter is a WMI notification query and it is already described pretty well elsewhere how to configure it.

Here, instead, I want to walk thru the process and the queries I use to understand where the noise comes from and what could be filtered – and get an estimate of how much space we could be saving if we filter one way or another.

Number of Events per User

--event count by User (with Percentages)
declare @total float
select @total = count(HeaderUser) from AdtServer.dvHeader
select count(HeaderUser),HeaderUser, cast(convert(float,(count(HeaderUser)) / (convert(float,@total)) * 100) as decimal(10,2))
from AdtServer.dvHeader
group by HeaderUser
order by count(HeaderUser) desc

In our example above, over the 14 days we were observing, we obtained percentages like the following ones:

#evt | HeaderUser account | Percent
204,904,332 SYSTEM 40.79 %
18,811,139 LOCAL SERVICE 3.74 %
14,883,946 ANONYMOUS LOGON 2.96 %
10,536,317 appintrauser 2.09 %
5,590,434 mossfarmusr

Just by looking at this, it is pretty clear that filtering out events tracked by the accounts “SYSTEM”, “LOCAL SERVICE” and “ANONYMOUS”, we would save over 45% of the disk space!
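Just to make the idea concrete – and not as a recommendation to blindly drop those accounts – a filter of that sort is set on the collector with AdtAdmin. The line below is only a sketch: I am assuming the collector's WQL event class and property names (AdtsEvent, HeaderUser) match the fields used in the queries above, so double-check them against the ACS documentation before using anything like this:

AdtAdmin.exe /setquery /collector:"<CollectorName>" /query:"SELECT * FROM AdtsEvent WHERE NOT (HeaderUser='SYSTEM' OR HeaderUser='LOCAL SERVICE' OR HeaderUser='ANONYMOUS LOGON')"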

Number of Events by EventID

Similarly, we can look at how different Event IDs have different weights on the total amount of events tracked in the database:

--event count by ID (with Percentages)
declare @total float
select @total = count(EventId) from AdtServer.dvHeader
select count(EventId),EventId, cast(convert(float,(count(EventId)) / (convert(float,@total)) * 100) as decimal(10,2))
from AdtServer.dvHeader
group by EventId
order by count(EventId) desc

We would get some similar information here:

Event ID Meaning Sum of events Percent
538 A user logged off 99,494,648 27.63
540 Successful Network Logon 97,819,640 27.16
672 Authentication Ticket Request 52,281,129 14.52
680 Account Used for Logon by (Windows 2000) 35,141,235 9.76
576 Specified privileges were added to a user's access token. 26,154,761 7.26
8086 Custom Application ID 18,789,599 5.21
673 Service Ticket Request 10,641,090 2.95
675 Pre-Authentication Failed 7,890,823 2.19
552 Logon attempt using explicit credentials 4,143,741 1.15
539 Logon Failure – Account locked out 2,383,809 0.66
528 Successful Logon 1,764,697 0.49

Also, do not forget that ACS provides some reports to do this type of analysis out of the box, even if, in my experience, they are generally slower – on large datasets – than the queries provided here. Also, a number of reports have been buggy over time, so I just prefer to run queries and be on the safe side.

Below is an example of such a report (even if run against a different environment – just in case you were wondering why the numbers are not the same ones :-)):

Event Counts ACS Default Report

The numbers and percentages we got from the two queries above should already point us in the right direction about what we might want to adjust in either our auditing policy directly on Windows and/or decide if there is something we want to filter out at the collector level (here you should ask yourself the question: “if they aren’t worth collecting are they worth generating?” – but I digress).

Also, a permutation of the above two queries should let you see which user is generating the most “noise” in regards to some events and not other ones… for example:

--event distribution for a specific user (change the @user) – with percentages for the user and compared with the total #events in the DB
declare @user varchar(255)
set @user = 'SYSTEM'
declare @total float
select @total = count(Id) from AdtServer.dvHeader
declare @totalforuser float
select @totalforuser = count(Id) from AdtServer.dvHeader where HeaderUser = @user
select count(Id), EventID, cast(convert(float,(count(Id)) / convert(float,@totalforuser) * 100) as decimal(10,2)) as PercentageForUser, cast(convert(float,(count(Id)) / (convert(float,@total)) * 100) as decimal(10,2)) as PercentageTotal
from AdtServer.dvHeader
where HeaderUser = @user
group by EventID
order by count(Id) desc

The above is particularly important, as we might want to filter out a number of events for the SYSTEM account (i.e. logons that occur when starting and stopping services) but we might want to keep other events that are tracked by the SYSTEM account too, such as an administrator having wiped the Security Log clean – which might be something you want to keep:

Event ID 517 Audit Log was cleared

of course the number of 517 events over the total of events tracked by the SYSTEM account will not be as large, and we can still filter the other ones out.

Number of Events by EventID and by User

We could also combine the two approaches above – by EventID and by User:

select count(Id), HeaderUser, EventId
from AdtServer.dvHeader
group by HeaderUser, EventId
order by count(Id) desc

This will produce a table like the following one

SQL Query: Events by EventID and by User

which can be easily copied/pasted into Excel in order to produce a pivot Table:

Pivot Table

Cluster EventLog Replication

One more aspect that is less widely known, but I think is worth showing, is the way that clusters behave when in ACS. I don’t mean all clusters… but if you keep the “eventlog replication” feature of clusters enabled (you should disable it also from a monitoring perspective, but I digress), each cluster node’s security eventlog will have events not just for itself, but for all other nodes as well.

Albeit I have not found a reliable way to filter these out – other than disabling eventlog replication altogether.

Anyway, just to get an idea of how much this type of “duplicate” event weighs on the total, I use the following query, which tells you how many events for each machine are tracked by another machine:

--to spot machines that are cluster nodes with eventlog replication and write duplicate events (slow)
select Count(Id) as Total, replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','') as ForwarderMachine, EventMachine
from AdtServer.dvHeader
--where ForwarderMachine <> EventMachine
group by EventMachine, replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','')
order by ForwarderMachine, EventMachine

Cluster Events

Those presented above are just some of the approaches I usually look into at first. Of course there are a number more. Here I am including the same queries already shown in action, plus a few more that can be useful in this process.

I have even considered building a page with all these queries – a bit like those that Kevin is collecting for OpsMgr (we actually wrote some of them together when building the OpsMgr Health Check)… shall I move the queries below to such a page? I thought I’d list them here and give some background on how I normally use them, to start off with.

Some more Useful Queries

--top event ids
select count(EventId), EventId
from AdtServer.dvHeader
group by EventId
order by count(EventId) desc

--event count by ID (with Percentages)
declare @total float
select @total = count(EventId) from AdtServer.dvHeader
select count(EventId),EventId, cast(convert(float,(count(EventId)) / (convert(float,@total)) * 100) as decimal(10,2))
from AdtServer.dvHeader
group by EventId
order by count(EventId) desc

--which machines have ever written event 538
select distinct EventMachine, count(EventId) as total
from AdtServer.dvHeader
where EventID = 538
group by EventMachine

--machines
select * from dtMachine

--machines (more readable)
select replace(right(Description, (len(Description) - patindex('%\%',Description))),'$','')
from dtMachine

--events by machine
select count(EventMachine), EventMachine
from AdtServer.dvHeader
group by EventMachine

--rows where the EventMachine field is not available (typically events written by ACS itself for checkpointing)
select *
from AdtServer.dvHeader
where EventMachine = 'n/a'

--event count by day
select convert(varchar(20), CreationTime, 102) as Date, count(EventMachine) as total
from AdtServer.dvHeader
group by convert(varchar(20), CreationTime, 102)
order by convert(varchar(20), CreationTime, 102)

--event count by day and by machine
select convert(varchar(20), CreationTime, 102) as Date, EventMachine, count(EventMachine) as total
from AdtServer.dvHeader
group by EventMachine, convert(varchar(20), CreationTime, 102)
order by convert(varchar(20), CreationTime, 102)

--event count by machine and by date (distinguishes between AgentMachine and EventMachine)
select convert(varchar(10),CreationTime,102),Count(Id),EventMachine,AgentMachine
from AdtServer.dvHeader
group by convert(varchar(10),CreationTime,102),EventMachine,AgentMachine
order by convert(varchar(10),CreationTime,102) desc ,EventMachine

--event count by User
select count(Id),HeaderUser
from AdtServer.dvHeader
group by HeaderUser
order by count(Id) desc

--event count by User (with Percentages)
declare @total float
select @total = count(HeaderUser) from AdtServer.dvHeader
select count(HeaderUser),HeaderUser, cast(convert(float,(count(HeaderUser)) / (convert(float,@total)) * 100) as decimal(10,2))
from AdtServer.dvHeader
group by HeaderUser
order by count(HeaderUser) desc

--event distribution for a specific user (change the @user) – with percentages for the user and compared with the total #events in the DB
declare @user varchar(255)
set @user = 'SYSTEM'
declare @total float
select @total = count(Id) from AdtServer.dvHeader
declare @totalforuser float
select @totalforuser = count(Id) from AdtServer.dvHeader where HeaderUser = @user
select count(Id), EventID, cast(convert(float,(count(Id)) / convert(float,@totalforuser) * 100) as decimal(10,2)) as PercentageForUser, cast(convert(float,(count(Id)) / (convert(float,@total)) * 100) as decimal(10,2)) as PercentageTotal
from AdtServer.dvHeader
where HeaderUser = @user
group by EventID
order by count(Id) desc

--to spot machines that write duplicate events (such as cluster nodes with eventlog replication enabled)
select Count(Id),EventMachine,AgentMachine
from AdtServer.dvHeader
group by EventMachine,AgentMachine
order by EventMachine

--to spot machines that are cluster nodes with eventlog replication and write duplicate events (better but slower)
select Count(Id) as Total,replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','') as ForwarderMachine, EventMachine
from AdtServer.dvHeader
--where ForwarderMachine <> EventMachine
group by EventMachine,replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','')
order by ForwarderMachine,EventMachine

--which user and from which machine is the target of elevation (network service doing "runas" is a 552 event)
select count(Id),EventMachine, TargetUser
from AdtServer.dvHeader
where HeaderUser = 'NETWORK SERVICE'
and EventID = 552
group by EventMachine, TargetUser
order by count(Id) desc

--by hour, minute and user
--(change the timestamps)… this query is useful to find out which users are active in a given time period…
--helpful to spot "peaks" of activity such as password brute force attacks, or other activities limited in time.
select datepart(hour,CreationTime) as Hours, datepart(minute,CreationTime) as Minutes, HeaderUser, count(Id) as total
from AdtServer.dvHeader
where CreationTime < '2010-02-22T16:00:00.000'
and CreationTime > '2010-02-22T15:00:00.000'
group by datepart(hour,CreationTime), datepart(minute,CreationTime),HeaderUser
order by datepart(hour,CreationTime), datepart(minute,CreationTime),HeaderUser
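
Along the same lines, if you don't yet know which time window to zoom into, a rough sketch like the following (untested, but using only the same columns already shown above) can survey event volume by day and hour across the whole database, so you can locate the peaks first:

--event count by day and hour (sketch to locate activity peaks before zooming in)
select convert(varchar(10), CreationTime, 102) as Date, datepart(hour, CreationTime) as Hour, count(Id) as total
from AdtServer.dvHeader
group by convert(varchar(10), CreationTime, 102), datepart(hour, CreationTime)
order by convert(varchar(10), CreationTime, 102), datepart(hour, CreationTime)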

OpsMgr Eventlog analysis with Powershell

Wednesday, December 16th, 2009

The following technique should already be familiar to any powersheller. Here we focus on Operations Manager log entries, even if the data mining technique shown is entirely possible – and encouraged :-) – with any other event log.

Let’s start by getting our eventlog into a variable called $evt:

PS  >> $evt = Get-Eventlog “Operations Manager”

The above only works locally in POSH v1.

In POSH v2 you can go remotely by using the “-computername” parameter:

PS  >> $evt = Get-Eventlog “Operations Manager” -computername RMS.domain.com

Anyhow, you can get to this remotely also in POSHv1 with this other more “dotNET-tish” syntax:

PS >> $evt = (New-Object System.Diagnostics.Eventlog -ArgumentList "Operations Manager").get_Entries()
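
Note that, as written above, that constructor still reads the local log; the same class also accepts a machine name as a second argument if you want to reach a remote box (the host name below is just an example):

PS >> $evt = (New-Object System.Diagnostics.Eventlog -ArgumentList "Operations Manager","RMS.domain.com").get_Entries()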

you could even export this (or any of the above) to a CLIXML file:

PS >> (New-Object System.Diagnostics.Eventlog -ArgumentList "Operations Manager").get_Entries() | export-clixml -path c:\evt\Evt-OpsMgr-RMS.MYDOMAIN.COM.xml

and then you could reload your eventlog to another machine:

PS  >> $evt = import-clixml c:\evt\Evt-OpsMgr-RMS.MYDOMAIN.COM.xml

whatever way you used to populate your $evt  variable, be it from a “live” eventlog or by re-importing it from XML, you can then start analyzing it:

PS  >> $evt | where {$_.Entrytype -match "Error"} | select EventId,Source,Message | group eventid

Count Name                      Group
—– —-                      —–
1510 4509                      {@{EventID=4509; Source=HealthService; Message=The constructor for the managed module type "Microsoft.EnterpriseManagement.Mom.DatabaseQueryModules.GroupCalculatio.
   15 20022                     {@{EventID=20022; Source=OpsMgr Connector; Message=The health service {7B0E947B-2055…
    3 26319                     {@{EventID=26319; Source=OpsMgr SDK Service; Message=An exception was thrown while p…
    1 4512                      {@{EventID=4512; Source=HealthService; Message=Converting data batch to XML failed w…

the above is functionally identical to the following:

PS  >> $evt | where {$_.Entrytype -eq 1} | select EventID,Source,Message | group eventid

Count Name                      Group
—– —-                      —–
1510 4509                      {@{EventID=4509; Source=HealthService; Message=The constructor for the managed modul…
   15 20022                     {@{EventID=20022; Source=OpsMgr Connector; Message=The health service {7B0E947B-2055…
    3 26319                     {@{EventID=26319; Source=OpsMgr SDK Service; Message=An exception was thrown while p…
    1 4512                      {@{EventID=4512; Source=HealthService; Message=Converting data batch to XML failed w…

Note that Eventlog Entries’ type is an ENUM that has values of 0,1,2 – similarly to OpsMgr health states – but beware that their order is not the same, as shown in the following table:

Code   OpsMgr State    EventLog EntryType
0      Not Monitored   Information
1      Success         Error
2      Warning         Warning
3      Critical        (none)
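
A quick way to see how your own log breaks down across those EntryType values is a one-liner like this (a minimal sketch, reusing the same $evt variable):

PS  >> $evt | group EntryType | select Count,Name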

Let’s now look at Information Events (Entrytype –eq 0)

PS  >> $evt | where {$_.Entrytype -eq 0} | select EventID,Source,Message | group eventid

Count Name                      Group
—– —-                      —–
4135 2110                      {@{EventID=2110; Source=HealthService; Message=Health Service successfully transferr…
1548 21025                     {@{EventID=21025; Source=OpsMgr Connector; Message=OpsMgr has received new configura…
4644 7026                      {@{EventID=7026; Source=HealthService; Message=The Health Service successfully logge…
1548 7023                      {@{EventID=7023; Source=HealthService; Message=The Health Service has downloaded sec…
1548 7025                      {@{EventID=7025; Source=HealthService; Message=The Health Service has authorized all…
1548 7024                      {@{EventID=7024; Source=HealthService; Message=The Health Service successfully logge…
1548 7028                      {@{EventID=7028; Source=HealthService; Message=All RunAs accounts for management gro…
   16 20021                     {@{EventID=20021; Source=OpsMgr Connector; Message=The health service {7B0E947B-2055…
   13 7019                      {@{EventID=7019; Source=HealthService; Message=The Health Service has validated all …
    4 4002                      {@{EventID=4002; Source=Health Service Script; Message=Microsoft.Windows.Server.Logi…

 

And “Warning” events (Entrytype –eq 2):

PS  >> $evt | where {$_.Entrytype -eq 2} | select EventID,Source,Message | group eventid

Count Name                      Group
—– —-                      —–
1511 1103                      {@{EventID=1103; Source=HealthService; Message=Summary: 1 rule(s)/monitor(s) failed …
  501 20058                     {@{EventID=20058; Source=OpsMgr Connector; Message=The Root Connector has received b…
    5 29202                     {@{EventID=29202; Source=OpsMgr Config Service; Message=OpsMgr Config Service could …
  421 31501                     {@{EventID=31501; Source=Health Service Modules; Message=No primary recipients were …
   18 10103                     {@{EventID=10103; Source=Health Service Modules; Message=In PerfDataSource, could no…
    1 29105                     {@{EventID=29105; Source=OpsMgr Config Service; Message=The request for management p…

 

 

Ok, now let’s see those 20022 events, for example… so we get an idea of which healthservices they are referring to (event 20022 indicates a “heartbeat failure”, btw):

PS  >> $evt | where {$_.eventid -eq 20022} | select message

Message
——-
The health service {7B0E947B-2055-C12A-B6DB-DD6B311ADF39} running on host webapp3.domain1.mydomain.com and s…
The health service {E3B3CCAA-E797-4F08-860F-47558B3DA477} running on host SERVER1.domain2.mydomain.com and serving…
The health service {E3B3CCAA-E797-4F08-860F-47558B3DA477} running on host SERVER1.domain2.mydomain.com and serving…
The health service {E3B3CCAA-E797-4F08-860F-47558B3DA477} running on host SERVER1.domain2.mydomain.com and serving…
The health service {52E16F9C-EB1A-9FAF-5B9C-1AA9C8BC28E3} running on host DC4WK3.domain1.mydomain.com and se…
The health service {F96CC9E6-2EC4-7E63-EE5A-FF9286031C50} running on host VWEBDL2.domain1.mydomain.com and s…
The health service {71987EE0-909A-8465-C32D-05F315C301CC} running on host VDEVWEBPROBE2.domain2.mydomain.com….
The health service {BAF6716E-54A7-DF68-ABCB-B1101EDB2506} running on host XP2SMS002.domain2.mydomain.com and serving mana…
The health service {30C81387-D5E0-32D6-C3A3-C649F1CF66F1} running on host stgweb3.domain3.mydomain.com and…
The health service {3DCDD330-BBBB-B8E8-4FED-EF163B27DE0A} running on host VWEBDL1.domain1.mydomain.com and s…
The health service {13A47552-2693-E774-4F87-87DF68B2F0C0} running on host DC2.domain4.mydomain.com and …
The health service {920BF9A8-C315-3064-A5AA-A92AA270529C} running on host FSCLU2 and serving management group Pr…
The health service {FAA3C2B5-C162-C742-786F-F3F8DC8CAC2F} running on host WEBAPP4.domain1.mydomain.com and s…
The health service {3DCDD330-BBBB-B8E8-4FED-EF163B27DE0A} running on host WEBDL1.domain1.mydomain.com and s…
The health service {3DCDD330-BBBB-B8E8-4FED-EF163B27DE0A} running on host WEBDL1.domain1.mydomain.com and s…

 

or let’s look at some warning for the Config Service:

PS  >> $evt | where {$_.Eventid -eq 29202}

   Index Time          EntryType   Source                 InstanceID Message
   —– —-          ———   ——                 ———- ——-
5535065 Dec 07 21:18  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…
5543960 Dec 09 16:39  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…
5545536 Dec 10 01:06  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…
5553119 Dec 11 08:24  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…
5555677 Dec 11 10:34  Warning     OpsMgr Config Ser…   2147512850 OpsMgr Config Service could not retrieve a cons…

Having seen those, can you recall any particular load you had on those days that would justify the instance space changing so quickly that the Config Service couldn’t keep up?

 

Or let’s group those events with ID 21025 by day, so we know how many Config recalculations we’ve had (which, if many, might indicate Config Churn):

PS  >> $evt | where {$_.Eventid -eq 21025} | select TimeGenerated | % {$_.TimeGenerated.ToShortDateString()} | group

Count Name                      Group
—– —-                      —–
   39 12/7/2009                 {12/7/2009, 12/7/2009, 12/7/2009, 12/7/2009…}
  203 12/8/2009                 {12/8/2009, 12/8/2009, 12/8/2009, 12/8/2009…}
  217 12/9/2009                 {12/9/2009, 12/9/2009, 12/9/2009, 12/9/2009…}
  278 12/10/2009                {12/10/2009, 12/10/2009, 12/10/2009, 12/10/2009…}
  259 12/11/2009                {12/11/2009, 12/11/2009, 12/11/2009, 12/11/2009…}
  224 12/12/2009                {12/12/2009, 12/12/2009, 12/12/2009, 12/12/2009…}
  237 12/13/2009                {12/13/2009, 12/13/2009, 12/13/2009, 12/13/2009…}
   91 12/14/2009                {12/14/2009, 12/14/2009, 12/14/2009, 12/14/2009…}
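
If you need to drill down further, a per-hour breakdown is just a different format string away (a quick, untested sketch along the same lines):

PS  >> $evt | where {$_.Eventid -eq 21025} | % {$_.TimeGenerated.ToString("yyyy-MM-dd HH")} | group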

 

Event ID 21025 shows that there is a new configuration for the Management Group.

Event ID 29103 has a similar wording, but shows that there is a new configuration for a given Health Service. There should normally be many more of these events, unless your only Health Service is the RMS, which is unlikely…

If we look at the event description (“message”) in search of the name (or even the GUID, as both are present) of our RMS, as follows, then the numbers should match those of the 21025 events above:

PS  >> $evt | where {$_.Eventid -eq 29103} | where {$_.message -match "myrms.domain.com"} | select TimeGenerated | % {$_.TimeGenerated.ToShortDateString()} | group

Count Name                      Group
—– —-                      —–
   39 12/7/2009                 {12/7/2009, 12/7/2009, 12/7/2009, 12/7/2009…}
  203 12/8/2009                 {12/8/2009, 12/8/2009, 12/8/2009, 12/8/2009…}
  217 12/9/2009                 {12/9/2009, 12/9/2009, 12/9/2009, 12/9/2009…}
  278 12/10/2009                {12/10/2009, 12/10/2009, 12/10/2009, 12/10/2009…}
  259 12/11/2009                {12/11/2009, 12/11/2009, 12/11/2009, 12/11/2009…}
  224 12/12/2009                {12/12/2009, 12/12/2009, 12/12/2009, 12/12/2009…}
  237 12/13/2009                {12/13/2009, 12/13/2009, 12/13/2009, 12/13/2009…}
   91 12/14/2009                {12/14/2009, 12/14/2009, 12/14/2009, 12/14/2009…}

 

Going back to the initial counts of events by their IDs, the error counts above had spotted the presence of a lonely 4512 event, which might have gone undetected if we had just been browsing the eventlog with the GUI, since it only occurred once.

Let’s take a look at it:

PS  >> $evt | where {$_.eventid -eq 4512}

   Index Time          EntryType   Source                 InstanceID Message
   —– —-          ———   ——                 ———- ——-
5560756 Dec 12 11:18  Error       HealthService          3221229984 Converting data batch to XML failed with error …

Now, when it comes to counts, Powershell is great. But sometimes Powershell makes it difficult to actually READ the (long) event messages (descriptions) in the console. For example, our event ID 4512 is difficult to read in its entirety and gets truncated with trailing dots…

we can of course increase the window size and/or select only THAT one field to read it better:

PS  >> $evt | where {$_.eventid -eq 4512} | select message

Message
——-
Converting data batch to XML failed with error "Not enough storage is available to complete this operation." (0x8007000E) in rule "Microsoft.SystemCenter.ConfigurationService.CollectionRule.Event.ConfigurationChanged" running for instance "RMS.MYDOMAIN.COM" with id:"{04F4ADED-2C7F-92EF-D620-9AF9685F736F}" in management group "SCOMPROD"
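
Another option that usually helps with long descriptions is Format-List, which wraps fields instead of truncating them (a minimal sketch on the same data):

PS  >> $evt | where {$_.eventid -eq 4512} | format-list Index,TimeGenerated,EntryType,Source,Message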

Or, worst case, if it still does not fit, we can still go and search for it in the actual, usual eventlog application… but at least we will have spotted it!

 

The above should give you an idea of what can easily be accomplished with some simple one-liners, and how they can be a useful aid in analyzing/digging into eventlogs.

All of the above is ALSO possible with Logparser, which would actually be lighter on memory usage and quicker, to be honest!

I just like Powershell syntax a lot more, and its ubiquity, which makes it a better option for me. Your mileage may vary, of course.

Invoking Methods on the Xplat agent with WINRM

Monday, October 26th, 2009

So I was testing other stuff tonight, to be honest, but I got pinged on Instant Messenger by my geek friend and colleague Stefan Stranger who pointed me at his request for help here http://friendfeed.com/sstranger/4571f39b/help-needed-on-winrs-or-winrm-and-openwsman-to

He wanted to use WINRM or any other command line utility to interact with the Xplat agent, and call methods on the Unix machine from windows. This could be very useful to – for example – restart a service (in fact it is what the RECOVERY actions in the Xplat Management Packs do, btw).

At first I told him I had only tested enumerations – such as in this other post http://www.muscetta.com/2009/06/01/using-the-scx-agent-with-wsman-from-powershell-v2/ … but the question intrigued me, so I checked out the help for winrm’s INVOKE verb:

clip_image002

It told me that you can pass in the parameters for the method to be called/invoked either as a hashtable @{KEY="value";KEY2="value"}, or as an input XML file. I first tried the XML file, but I could not get its format right.

After a few more minutes of trying, I figured out the right syntax.

This one works, for example:

winrm invoke ExecuteCommand http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx @{command="ps";timeout="60"} -username:root -password:password -auth:basic -r:https://virtubuntu.huis.dom:1270/wsman -skipCACheck -encoding:UTF-8

clip_image004
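
For the record, the same call should also be doable with the Powershell v2 WSMan cmdlets, along these lines (an untested sketch: I have not verified whether the __cimnamespace selector can be embedded in the ResourceURI like this, or whether it needs to be passed separately via -SelectorSet):

$opt = New-WSManSessionOption -SkipCACheck
Invoke-WSManAction -Action ExecuteCommand `
  -ResourceURI "http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx" `
  -ValueSet @{command="ps";timeout="60"} `
  -ConnectionURI "https://virtubuntu.huis.dom:1270/wsman" `
  -Authentication basic -Credential (Get-Credential) -SessionOption $opt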

Happy remote management of your unix systems from Windows :-)

The mystery of the lost registry values

Thursday, September 10th, 2009

During the OpsMgr Health Check engagement we use custom code to assess the customer’s Management group, as I wrote here already. Given that the customer tells us which machine is the RMS, one of the very first things that we do in our tool is to connect to the RMS’s registry, and check the values under HKLM\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup to see which machine holds the database. It is a rather critical piece of information for us, as we run a number of queries afterward… so we need to know where the db is, obviously :-)

I learned from here http://mybsinfo.blogspot.com/2007/01/powershell-remote-registry-and-you-part.html how to access registry remotely thru powershell, by using .Net classes. This is also one of the methods illustrated in this other article on Technet Script Center http://www.microsoft.com/technet/scriptcenter/resources/qanda/jan09/hey0105.mspx 

Therefore the “core” instructions of the function I was using to access the registry looked like the following

Function GetValueFromRegistry ([string]$computername, $regkey, $value)
{
     $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $computername)
     $regKey= $reg.OpenSubKey("$regKey")
     $result = $regkey.GetValue("$value")
     return $result
}

 

[Note: the actual function is bigger, and contains error handling, and logging, and a number of other things that are unnecessary here]

Therefore, the function was called as follows:
GetValueFromRegistry $RMS "SOFTWARE\\Microsoft\\Microsoft Operations Manager\\3.0\\Setup" "DatabaseServerName"
Now so far so good.

In theory.

 

Now, for some reason that I could not immediately explain, we had noticed that this piece of code performing the registry access, while working most of the time, on SOME occasions was giving errors about not being able to open the registry value…

image

When you are onsite with a customer conducting an assessment, the PFE engineer does not always have the time to troubleshoot the error… as time is critical, we usually resorted to just running the assessment from ANOTHER machine, and this “solved” the issue… but it always left me wondering WHY this was giving an error. I had suspected an issue with permissions at first, but it could not be that, as the permissions were obviously right: performing the assessment from another machine with the same user was working!

A few days ago my colleague and buddy Stefan Stranger figured out that this was related to the platform architecture:

  • X64 client to x64 RMS was working
  • X64 client to x86 RMS was working
  • X86 client to x86 RMS was working
  • X86 client to x64 RMS was NOT working

You don’t need to use our custom code to reproduce this, REGEDIT shows the behavior as well.

If, from a 64-bit server, you open a remote registry connection to 64-bit RMS server, you can see all OpsMgr registry keys:

clip_image002

If, anyhow, from a 32-bit server, you open a remote registry connection to 64-bit RMS server, you don’t see ALL – but only SOME – OpsMgr registry keys:
clip_image004

So here’s the reason! This is what was happening! How could I not think of this before? It was nothing related to permissions, but to registry redirection! The issue was happening because the 32-bit machine uses the 32-bit registry editor, and when accessing a 64-bit machine it defaults to the Wow6432Node location in the registry. Not all of the OpsMgr data is present in the WOW64 location on a 64-bit machine, only some of it.

So, just like regedit, the 32-bit powershell and the 32-bit .Net framework were being redirected to the 32-bit-compatibility registry keys… not finding the stuff we needed, whereas a 64-bit application could find it. Any 32-bit application by default gets redirected to a 32-bit-safe view of the registry.

So, after finally UNDERSTANDING what the issue was, I started wondering: ok… but how can I access the REAL “HKLM\SOFTWARE\Microsoft” key on a 64-bit machine when running this FROM a 32-bit machine – WITHOUT being redirected to “HKLM\SOFTWARE\Wow6432Node\Microsoft”? What if my application CAN deal just fine with those values and actually NEEDs to access them?

The answer wasn’t as easy as the question. I did a bit of digging on this, and I still have NOT found a way to do this with the .Net classes. It seems that in a lot of situations, Powershell or even the .Net classes are nice and sweet wrappers on the underlying Windows APIs… but for all their sweetness and ease, they are very often not very complete wrappers – letting you do just about enough for most situations, but not quite everything you could do with the API underneath. But I digress, here…
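
(As an aside: if you can rely on a newer .Net Framework – 4.0 and later, which was not an option on the machines I was dealing with at the time – the RegistryKey class gained overloads that accept a RegistryView, which should let a 32-bit process ask explicitly for the 64-bit hive. A rough, untested sketch:

PS >> $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $computername, [Microsoft.Win32.RegistryView]::Registry64)
PS >> $regkey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup")
PS >> $regkey.GetValue("DatabaseServerName")

But with what was available back then, that was not a path I could take.)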

The good news is that I did manage to get this working, but I had to resort to using dear old WMI and its StdRegProv class… There are a number of locations on the Internet mentioning the issue of accessing the 32-bit registry from 64-bit machines or vice versa, but all the examples I found were using VBScript. I needed it in Powershell, therefore I started from the VBScript example code that is present here, and ported it to Powershell.

Handling the WMI COM object from Powershell was slightly less intuitive than in VBScript, and it took me a couple of hours to figure out how to change some stuff, especially this bit that sets the parameters collection:

Set Inparams = objStdRegProv.Methods_("GetStringValue").Inparameters

Inparams.Hdefkey = HKLM

Inparams.Ssubkeyname = RegKey

Inparams.Svaluename = RegValue

Set Outparams = objStdRegProv.ExecMethod_("GetStringValue", Inparams,,objCtx)

INTO this:

$Inparams = ($objStdRegProv.Methods_ | where {$_.name -eq "GetStringValue"}).InParameters.SpawnInstance_()

($Inparams.Properties_ | where {$_.name -eq "Hdefkey"}).Value = $HKLM

($Inparams.Properties_ | where {$_.name -eq "Ssubkeyname"}).Value = $regkey

($Inparams.Properties_ | where {$_.name -eq "Svaluename"}).Value = $value

$Outparams = $objStdRegProv.ExecMethod_("GetStringValue", $Inparams, "", $objNamedValueSet)

 

I have only done limited testing at this point and, even if the actual work now requires nearly 15 lines of code vs. the previous 3 lines of the .Net implementation, it at least seems to work just fine.

What follows is the complete code of my replacement function, in all its ugly glory:

 

Function GetValueFromRegistryThruWMI([string]$computername, $regkey, $value)
{
    #constant for HKEY_LOCAL_MACHINE
    $HKLM = "&h80000002"

    #creates an SwbemNamedValueSet object
    $objNamedValueSet = New-Object -COM "WbemScripting.SWbemNamedValueSet"

    #adds the value that requests the target to provide 64bit-registry info
    $objNamedValueSet.Add("__ProviderArchitecture", 64) | Out-Null

    #back to all the other usual COM objects for WMI that you have used a zillion times in VBScript
    $objLocator = New-Object -COM "Wbemscripting.SWbemLocator"
    $objServices = $objLocator.ConnectServer($computername,"root\default","","","","","",$objNamedValueSet)
    $objStdRegProv = $objServices.Get("StdRegProv")

    # Obtain an InParameters object specific to the method.
    $Inparams = ($objStdRegProv.Methods_ | where {$_.name -eq "GetStringValue"}).InParameters.SpawnInstance_()

    # Add the input parameters
    ($Inparams.Properties_ | where {$_.name -eq "Hdefkey"}).Value = $HKLM
    ($Inparams.Properties_ | where {$_.name -eq "Ssubkeyname"}).Value = $regkey
    ($Inparams.Properties_ | where {$_.name -eq "Svaluename"}).Value = $value

    #Execute the method
    $Outparams = $objStdRegProv.ExecMethod_("GetStringValue", $Inparams, "", $objNamedValueSet)

    #shows the return value
    ($Outparams.Properties_ | where {$_.name -eq "ReturnValue"}).Value

    if (($Outparams.Properties_ | where {$_.name -eq "ReturnValue"}).Value -eq 0)
    {
       write-host "it worked"
       $result = ($Outparams.Properties_ | where {$_.name -eq "sValue"}).Value
       write-host "Result: $result"
       return $result
    }
    else
    {
        write-host "nope"
    }
}

 

which can be called similarly to the previous one:
GetValueFromRegistryThruWMI $RMS "SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup" "DatabaseServerName"

[Note: you don’t need the double-escaped backslashes here, compared to the .Net implementation]

Enjoy your cross-architecture registry access: from 32bit to 64bit – and back!

SCX Evolutions

Sunday, July 19th, 2009

During the beta of the Cross-Platform extensions and of System Center Operations Manager 2007 R2, the product team had promised to eventually release the SCX Providers' source code.

Now that this promise has been kept, and the SCX providers have been released on Codeplex at http://xplatproviders.codeplex.com/, it should finally be possible to entirely build your own unsupported agent package, starting from source code, without having to modify the original package as I have shown earlier on this blog.
Of course this will still be unsupported by Microsoft Product Support, but it will eventually work just fine!
This is an extraordinary event in my opinion, as it is not common for Microsoft to release code as open source, especially when it is part of one of the products it sells. I suspect we will see more of this going forward.

Also, at R2 release time, some official documentation about building Cross-Platform Management Packs has been published on Technet.

Anyway, I have in the past published a number of posts on my blog under this tag http://www.muscetta.com/tag/xplat/ (I will continue to use that tag going forward) which show/describe how I hacked/modified both the existing MPs AND the SCX agent package to let it run on unsupported distributions (and I think they are still useful, as they show a number of techniques for testing, understanding and troubleshooting the Xplat agent). In fact, I first learned how to understand and modify the RedHat MPs to monitor CentOS, and eventually even modified the RPM package to run on Ubuntu (which also works on Debian 5/Lenny) – as you can see, I am now using it to monitor – from home, across the Internet – the machine running this blog:

www.muscetta.com Performance in OpsMgr

Or even, with or without OpsMgr 2007 R2, you could write your own scripts to interact with those providers, by using your favourite Scripting Language.

After all, those experimentations with Xplat earned me a reputation as a "Unix expert at Microsoft" (this expression still makes me laugh), as I was tweeting here:
Unix expert at Microsoft

But really, I have never hidden my interest in interoperability and the fact that I have been using Linux quite a bit in the past, and still do.

Also, one more related piece of news is that the fine people at Xandros have released their Bridgeways Management Packs and at the same time have also started their own blog at http://blog.xplatxperts.com/ where they discuss some troubleshooting techniques for the Xplat agent, both similar to what I have been writing about here and also – of course – specific to their own providers, which live in their XSM namespace.

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

Using the SCX Agent with WSMan from Powershell v2

Monday, June 1st, 2009

So Powershell v2 adds a nice bunch of Ws-Man related cmdlets. Let’s see how we can use them to interact with OpenPegasus’s WSMan on a SCX Agent.

PS C:\maint> test-wsman -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential

image

But we do get this error:

Test-WSMan : The server certificate on the destination computer (virtubuntu.huis.dom:1270) has the following errors:
The SSL certificate could not be checked for revocation. The server used to check for revocation might be unreachable.

The SSL certificate is signed by an unknown certificate authority.
At line:1 char:11
+ test-wsman <<<<  -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl
+ CategoryInfo          : InvalidOperation: (:) [Test-WSMan], InvalidOperationException
+ FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.TestWSManCommand

The credentials above have to be a unix login. Which we typed correctly. But we still can't get thru, as the certificate used by the agent is not trusted by our workstation. This seems to be the “usual” issue I first faced when testing SCX with WINRM in beta1. At the time I simply dismissed it with the following sentence

[…] Of course you have to solve some other things such as DNS resolution AND trusting the self-issued certificates that the agent uses, first. Once you have done that, you can run test queries from the Windows box towards the Unix ones by using WinRM. […]

and I sincerely thought that it explained things pretty well… but eventually a lot of people got confused by this and did not know what to do, especially the part about trusting the certificate. Anyway, in the following posts I figured out you could pass the -skipCACheck parameter to WINRM… which solved the issue of having to trust the certificate (which is fine for testing, but I would not use that for automations and scripts running in production… as it might expose your credentials to man-in-the-middle attacks).

So it seems that with the Powershell cmdlets we are back to that issue, as I can’t find a parameter to skip the CA check. Maybe it is there, but with PSv2 not having been released yet, I don't know everything about it, and the CTP documentation is not yet complete. Therefore, back to trusting the certificate.

Trusting the certificate is actually very simple, but it can be a bit tricky when passing those certs back and forth from unix to windows. So let's make the process a bit clearer.

All of the SCX agents' certificates are ultimately signed by a key on the Management Server that discovered them, but I don't currently know where that certificate/key is stored on the management server. Anyway, you can get it from the agent certificate – as you only really need the public key, not the private signing key.

Use WinSCP or any other utility to copy the certificate off one of the agents.
You can find that in the /etc/opt/microsoft/scx/ssl location:

image

that scx-host-computername.pem is your agent certificate.
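
For example, from a Windows box you can pull it over with pscp (PuTTY's scp); the host and file names below are just placeholders from my lab:

pscp root@virtubuntu.huis.dom:/etc/opt/microsoft/scx/ssl/scx-host-virtubuntu.pem C:\maint\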

Copy it to the Management server and change its extension from .pem to .cer. Now Windows will be happy to show it to you with the usual Certificate interface:

image

We need to go to the “Certification Path” tab, select the ISSUER certificate (the one called “SCX-Certificate”):

image

then go to the “Details” tab, and use the “Copy to File” button to export the certificate.

After you have the certificate in a .CER file, you can add it to the “trusted root certification authorities” store on the computer you are running your powershell tests from.
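
If you prefer the command line to the MMC certificate wizard, certutil should do the trick (run it from an elevated prompt; the file name here is just whatever you called the exported issuer certificate):

certutil -addstore -f Root scx-issuer.cer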

image

So after you have trusted it, the same command as above actually works now:

PS C:\maint> test-wsman -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential

wsmid           : http://schemas.dmtf.org/wbem/wsman/identify/1/wsmanidentity.xsd
lang            :
ProtocolVersion : http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
ProductVendor   : Microsoft System Center Cross Platform
ProductVersion  : 1.0.4-248

Ok, we can talk to it! Now we can do something more fun, like actually returning instances and/or calling methods:

PS C:\maint> Get-WSManInstance -computer virtubuntu.huis.dom -authentication basic -credential (get-credential) -port 1270 -usessl -enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx

image

This is far from exhaustive, but should get you started on a world of possibilities about automating diagnostics and responses with Powershell v2 towards the OpsMgr 2007 R2 Cross-Platform machines. Enjoy!

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

Installing the OpsMgr 2007 R2 SCX Agent on Ubuntu

Saturday, May 30th, 2009

You know that since the beta1 of Xplat I have been busy modifying the RedHat management pack to monitor CentOS with OpsMgr. Now, CentOS is a distribution that is pretty similar to RedHat, so the RPM package just runs, and it is only a matter of hacking a modified MP.

I never went really further in my experiments, mostly due to lack of time… but then yesterday I got a comment to this older post asking about Ubuntu. Of course I know about Ubuntu, and have been using Debian-based distributions for years. I actually even prefer them over RPM-based distributions such as RedHat or SuSE (personal preference). Heck, even this weblog is running on Debian!

Anyway, I never really tried to see if one of the existing RPM packages for RedHat or SuSE could be modified to run on Ubuntu. I will eventually test this on Debian too, but for now I used Ubuntu, which tends to have slightly newer packages and libraries overall. The machine I tested on is an Ubuntu Server 8.04.2. Older/newer versions might differ slightly.

BEWARE THAT ALL THAT FOLLOWS BELOW IS NOT SUPPORTED BY MICROSOFT. It is only described here for EXPERIMENTAL (==fun) purpose. DO NOT USE THIS IN A PRODUCTION ENVIRONMENT.

So, you are warned. Now let’s hack it.

The first thing to do is to copy the RedHat agent’s RPM package off your OpsMgr 2007 R2 server, from the “usual” path “C:\Program Files\System Center Operations Manager 2007\AgentManagement\UnixAgents”. Let’s grab the RHEL5 agent, which is called scx-1.0.4-248.rhel.5.x86.rpm in R2 RTM.

First we need to CONVERT the RPM package to the DEB package format used by Ubuntu, by using the ALIEN package:

sudo apt-get update
sudo apt-get install alien
sudo bash
alien -k scx-1.0.4-248.rhel.5.x86.rpm --scripts
dpkg -i scx_1.0.4-248_i386.deb
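
To double-check that the converted package actually registered with dpkg, a quick look at the package list helps:

dpkg -l | grep scx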

image

The converted package will install… but the script execution will fail in a few places – most notably in the generation of the certificate, as it is not able to locate the right openssl libraries, as shown in the screenshot above.

If the libssl.so.6 file cannot be found, you might be missing the “libssl-dev” package, which you can install as follows:

apt-get install libssl-dev

But even if it is installed, you will find that the files are still reported as missing. This is not really true: actually, the files are there, but on Ubuntu they have a different name than on RedHat, that’s all. You can therefore create symlinks to the “right” files, so that they are aliased and get found afterwards:

cd /usr/lib
ln -s libcrypto.so.0.9.8 libcrypto.so.6
ln -s libssl.so.0.9.8 libssl.so.6

So now when installing the package, the certificate generation will work:

image

You are nearly ready to go. You have to start the service by using the init scripts – the “service” command is RedHat-specific and will not work here.

/etc/init.d/scx-cimd start is the “standard” way of starting daemons from init on Unix.

But it still fails, as the init script provided in the RedHat package is searching for a file called “functions”, which is present on RedHat and CentOS and provides re-usable functions for startup scripts to include:

image

How do you fix this? I just copied the /etc/init.d/functions file from a CentOS box to my Ubuntu box.

I copied it via SCP from the CentOS box I have:

cd /etc/init.d

scp root@centos.huis.dom:/etc/init.d/functions .

You can probably also find and fetch the file from the Internet (both CentOS and RedHat should have accessible repositories with all the files in their distributions, since it is open sourced).

After you have the file in place, the init script will be able to include it, will find the functions it needs, and the daemon/service will now start (albeit with minor errors I have not investigated for now, but which don’t seem to be causing trouble):

image

and here you can see it is finally running:

image

So let’s try to issue a few queries as shown in previous posts:

image

IT WORKS!!!

But… there is a “but”: not all classes actually return instances and values just yet. Most notably, the “SCX_OperatingSystem” class does not seem to return anything right away. That is a very important class, because it is the one we would use to first discover the Operating System object in the Management Packs. So we need to fix it. The reason the class does not return anything is that the SCX provider is looking at the /etc/redhat-release file to figure out which OS version/distribution the machine is running. And that file is obviously not there on Ubuntu.

On all Linuxes there is a similar file, called /etc/issue… which again, we can copy with the other name and trick the provider into working:

cd /etc

cp issue redhat-release

And NOW, the SCX_OperatingSystem Class also returns an instance:

image

The next step would be “cooking” an MP to discover Ubuntu. More on this in a later post (maybe). I did not test all classes and their implementations… you can try to poke at them by following the instructions and commands in my previous post here. But this should get you started.

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

Cross Platform in OpsMgr 2007 R2 Release Candidate

Friday, March 27th, 2009

You have heard it all over the place, System Center Operations Manager 2007 R2 has reached the Release Candidate milestone and the RC bits have been made available on connect.microsoft.com.

As it is becoming a tradition for me with each new release, I want to take a look at the Unix monitoring stuff like I did since beta1 of Xplat, passing thru beta2. I am an integration freak and I have always insisted that interoperability is key. I will leave the most obvious “release notes” kind of things out of here, such as saying that there are now agents for the x64 versions of Linux distros, and so on… you can read this stuff in the release notes already, and in a zillion other places.

Let’s instead look at my first impression (= I am amazed: this product is really getting awesome) and let’s do a bit of digging, mostly to note what has changed since my previous posts on Xplat (which, by the way, is the MOST visited post I have ever published on this blog) – of course there is A LOT more that has changed under the hood… but those are code changes, improvements, polishing of the product itself… while that would be interesting from a code perspective, here I am more interested in what the final user (the System Administrator) will ultimately interact with directly, and what he might need to troubleshoot and understand about how the pieces fit together to realize Unix monitoring in OpsMgr.

After having hacked the RedHat MP to work on my CentOS box (as usual), I started to take a look at what is installed on the Linux box. Here are the new services:

ps -Af | grep scx

You will notice the daemons have changed names and get launched with new parameters.

Of course when you see who uses port 1270 everything becomes clearer:

netstat -anp | grep 1270

Therefore I can match the two new names to their roles and understand that SCXCIMSERVER is the WSMAN implementation, while SCXCIMPROVAGT is the CIM/WBEM implementation.

There is one more difference at the “service” (or “daemon”) level: the fact that there is only ONE init script now: /etc/init.d/scx-cimd

/etc/init.d/scx-cimd

So basically the SCX “Agent” will start and stop as a single thing, even if it is composed of multiple executables that will spawn various processes.

Another difference: if we look in “familiar” locations like /etc/opt/microsoft/scx/bin/tools/ we see that a number of configuration files are either empty (0 bytes) or missing (like the one described on Anders’ blog to enable verbose logging of WSMan requests), when compared to earlier versions:

/etc/opt/microsoft/scx/conf

But that is because I have been told we now have a nice new tool called scxadmin under /opt/microsoft/scx/bin/tools/ , which will let you configure those things:

/opt/microsoft/scx/bin/tools/scxadmin

Therefore you would enable VERBOSE logging for all components by issuing the command

./scxadmin -log-set all verbose

and you will bring it back to a less noisy setting of logging only errors with

./scxadmin -log-set all errors

the logs will be written under /var/opt/microsoft/scx/log just like they did before.

Other than this, a lot of the troubleshooting techniques I showed in one of my previous posts – like how to query CIM classes directly, or remotely thru WSMAN by using winrm – should really stay the same. I will mention them again here for reference.

SCXCIMCLI is a useful and simple tool to query CIM directly. You can roughly compare it to wbemtest.exe in the Windows world (other than not having a UI). This utility can also be found in /opt/microsoft/scx/bin/tools.

A couple of examples of the most common/useful things you would do with scxcimcli:

1) Enumerate all Classes whose name contains “SCX_” in the root/scx namespace (the classes our Management packs use):

./scxcimcli nc -n root/scx -di |grep SCX_ | sort


2) Execute a Query

./scxcimcli xq "select * from SCX_OperatingSystem" -n root/scx

./scxcimcli xq "select * from SCX_OperatingSystem" -n root/scx

Another thing that you might want to test when troubleshooting discoveries is running the same queries through WS-Man (possibly from the same Management Server that will or should be managing that unix box). I already showed this in the past; it is the following command:

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -username:root -password:password -r:https://linuxbox.mydomain.com:1270/wsman -auth:basic -skipCACheck

but if you launch it that way it will now return an error like the following (or at least it did in my test lab):

Fault
Code
Value = SOAP-ENV:Sender
Subcode
Value = wsman:EncodingLimit
Reason
Text = UTF-16 is not supported; Please use UTF-8
Detail
FaultDetail = http://schemas.dmtf.org/wbem/wsman/1/wsman/faultDetail/CharacterSet

Error number:  -2144108468 0x8033804C
The WS-Management service does not support the character set used in the request
. Change the request to use UTF-8 or UTF-16.

The error message is pretty self-explanatory: you need to specify the UTF-8 character set. You can do that by adding the “-encoding” qualifier:

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -username:root -password:password -r:https://linuxbox.mydomain.com:1270/wsman -auth:basic -skipCACheck -encoding:UTF-8

Hope the above is useful to figure out the differences between the earlier beta releases of the System Center CrossPlatform extensions and the version built in OpsMgr 2007 R2 Release Candidate.

There are obviously a million other things in R2 worth writing about (either related to Unix monitoring or to everything else) and I am sure posts will start to appear on the many, more active, blogs out there (they have already started appearing, actually). I have not had time to dig further, but will likely do so AFTER Easter – for the next couple of weeks I will be travelling, working some of the time (but without my test environment and good connectivity) AND visiting relatives the rest of the time.

One last thing I noticed about the Unix/Cross Platform Management Packs in R2 Release Candidate… their current “release date” exposed by the MP Catalog Web Service is the 20th of March

image

…which happens to be my Birthday – therefore they must be a present for me! :-)

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

Early Adoptions, Health Checks and New Year Rants.

Tuesday, December 30th, 2008

Generations

Two days ago I read the following Tweet by Hugh MacLeod:

"[…] Early Adopter Problem: How to differentiate from the bandwagon, once the bandwagon starts moving faster than you are […]"

That makes me think of early adoption of a few technologies I have been working with, and how the community around those evolved. For example:

Operations Manager… early adoption meant that I have been working with it since the beta, had posted one of the earliest posts about how to use a script in a Unit Monitor back in May 2007 (the product was released in April 2007 and there was NO documentation back then, so we had to really try to figure out everything…), but someone seems to think it is worth repeating the very same lesson in November 2008, with not a lot of changes, as I wrote here. I don't mean to be rude to Anders… repeating things will surely help the late adopters find the information they need, of course.

Also, I started playing early with Powershell. I posted my first (and only) cmdlet back in 2006. It was not a lot more than a test for myself to learn how to write one, but that's just to say that I started playing early with it. I have been using it to automate tasks for example.

Going back to the quote above, everyone gets on the bandwagon posting examples and articles. I had been asked a few times about writing articles on OpsMgr and Powershell usage (for example by www.powershell.it) but I declined, as I was too busy using this knowledge to do stuff for work (where “work” is defined as in “work that pays your mortgage”), rather than seeking personal prestige through articles and blogs. Anyway, that kind of article is now appearing all over the Internet and the blogosphere. The above examples made me think of early adoption, and the bandwagon that follows later on… but even as an early adopter, I was never very noisy or visible.

Now, going back to what I do for work (which I mentioned here and here in the past), I work in the Premier Field Engineering organization of Microsoft Services, which provides Premier services to customers. Microsoft Premier customers have a wide range of Premier agreement features and components that they can use to support their people, improve their processes, and improve the productive use of the Microsoft technology they have purchased. Some of these services we provide are known to the world as “Health Checks”, some as “Risk Assessment Programs” (or, for short, RAPs). These are basically services where one of our technology experts goes to the customer site and there uses a custom, private Microsoft tool to gather a huge amount of data from the product we mean to look at (be it SQL, Exchange, AD or anything else…). The Health Check or RAP tool collects the data and outputs a draft of the report that will be delivered to the customer later on, with all the right sections and chapters. This is done so that every report of the same kind will look consistent, even if the engagement is performed by a different engineer in a different part of the world. The engineer will of course analyze the collected data and write recommendations about what is configured properly and/or about what could or should be changed and/or improved in the implementation to make it adhere to Best Practices. To make sure only the right people actually go onsite to do this job we have a strict internal accreditation process that must be followed; only accredited resources that know the product well enough and know exactly how to interpret the data that the tool collects are allowed to use it and to deliver the engagement, and present/write the findings to the customer.

So why am I telling you this here, and what have I been using my early knowledge of OpsMgr and Powershell for?

I have used that to write the Operations Manager Health Check, of course!

We had a MOM 2005 Health Check already, but since the technology has changed so much, from MOM to OpsMgr, we had to write a completely new tool. Jeff (the original MOM 2005 author, who does not have a blog that I can link to) and I are the main coders of this tool… and the tool itself is A POWERSHELL script. A longish one, of course (7000 lines, more or less), but nothing more than a Powershell script, at the end of the day. A few more colleagues helped shape the features and tested the tool, including Kevin Holman. Some of the database queries on Kevin’s blog are in fact what we use to extract some of the data (beware that some of those queries have recently been updated, in case you saved them and are using your local copy!), while some other information is gathered using internal and/or custom queries. At other times we use OpsMgr cmdlets or go to the SDK service, but a lot of the time we query the database directly (we really should use the SDK all the time, but for certain stuff direct database access is way faster). It took most of the past year to write it, test it, troubleshoot it, fix it, and deliver the first engagements as “beta” to some customers to help iron out the process… and now the delivery is available! If a year seems like a long time, you have to consider that this is all work that gets done next to what we all have to normally do with customers, not replacing it (i.e. I am not free to sit on my butt all day and just write the tool… I still have to deliver services to customers day in day out, in the meantime).

Occasionally, during this past calendar year that is now approaching its end, I have been willing to, and have found some extra time to, disclose some bits and pieces, techniques and prototypes of how to use Powershell and OpsMgr together, such as innovative ways to use Powershell in OpsMgr against beta features. But in general most of my early adopter’s investment went into the private tool for this engagement, and that is one of the reasons I couldn’t blog or write much about it, it being Microsoft Intellectual Property.

But it is also true that I did not care to write other stuff when I considered it too easy or it could be found in the documentation. I like writing of ideas, thoughts, rants OR things that I discover and that are not well documented at the time I study them… so when I figure out things I might like leaving a trail for some to follow. But I am not here to spoon feed people like some in the bandwagon are doing. Now the bandwagon is busy blogging and writing continuously about some aspect of OpsMgr (known or unknown, documented or not), and the answer to the original question of Hugh is, in my opinion, that it does not really matter what the bandwagon is doing right now. I was never here to do the same thing. I think that is my differentiator. I am not saying that what a bunch of colleagues and enthusiasts is doing is not useful: blogging and writing about various things they experiment with is interesting and it will be useful to people. But blogs are useful until a certain limit. I think that blogs are best suited for conversations and thoughts (rather than for "howto's"), and what I would love to see instead is: less marketing hype when new versions are announced and more real, official documentation.

But I think I should stop caring about what the bandwagon is doing, because that's just another ego trip at the end of the day. What I should more sensibly do, would be listening to my horoscope instead:

[…] "How do you slay the dragon?" journalist Bill Moyers asked mythologist Joseph Campbell in an interview. By "dragon," he was referring to the dangerous beast that symbolizes the most unripe and uncontrollable part of each of our lives. In reply to Moyers, Campbell didn't suggest that you become a master warrior, nor did he recommend that you cultivate high levels of sleek, savage anger. "Follow your bliss," he said simply. Personally, I don't know if that's enough to slay the dragon — I'm inclined to believe that you also have to take some defensive measures — but it's definitely worth an extended experiment. Would you consider trying that in 2009? […]