Capturing your knowledge/intelligence should be SIMPLE

Lately this blog has been very personal. This post is about stuff I do at work, so if you are not one of my IT readers, don't worry.

For my IT readers, here is an interruption from the guitars and music on this blog, to share some personal reflections on OpInsights and SCOM.

SCOM is very powerful. You know I have always been a huge fan of 2007 and worked myself on the 2012 release. But compared to its predecessor – MOM – authoring management packs in SCOM has always been hard: multiple tools, a lot of documentation… Here we are, more than six years later, and the first two comments on an old post on the momteam blog still strike me every time I read them:

whatever happened to click,click,done?

You would think that things have changed, but SCOM is fundamentally complex, and even with the advances in tooling (VSAE, MPAuthor, etc.) writing MPs is still black magic, if you ask some users.

I already blogged about exporting an MP and converting its event-based alerting rules to OpInsights searches.

Well, writing those alerting rules in SCOM requires a lot of complex XML – you might not need to know how to write it (but you often end up having to decipher it), and even if you create rules with a wizard, the wizard will produce a lot of complex XML for you.

The screenshot below shows the large XML chunk needed to pick up a specific event ID from a specific log and a specific source: the key information is only a small fraction of it, while the rest is ‘packaging’:

[screenshot: the XML of an event collection rule – only a few lines of it carry the event ID, source and log name]

I want OpInsights to be SIMPLE.

If there is one thing I want most for this project, it is this.

That's why the same rule can now be expressed with a simple filter search in OpInsights, where all you need is just that key information:

EventID=1037 Source="Microsoft-Windows-IIS-W3SVC" EventLog=System

and you essentially don't have to care about any sort of packaging nor mess with XML.

Click, click – filters/facets in the UI let you refine your criteria. And your saved searches too. And they execute right away; there is not even a ‘Done’ button to press. You might just be watching those searches pinned to tiles on your dashboard. All it took was identifying the three key pieces of info – no complex XML wrapping needed!
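
As a side note, those same three identifiers are also all you need to quickly check on a box whether such events are being logged at all – a small PowerShell sketch, simply reusing the values from the example above:

Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Microsoft-Windows-IIS-W3SVC'; Id = 1037 } -MaxEvents 10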

Ok, granted – there ARE legitimate, more complex scenarios for which you need complex data sources/collectors and specialized, well-thought-out data shaping, not just events – and we use those powerful capabilities of the MMA agent in intelligence packs. But at its core, the simple search language and explorability of the data are meant to bring SIMPLE back to the modern monitoring world. Help us prioritize what data sources you need first!

PS – if you have no idea what I was talking about – thanks for making it this far, but don’t worry: either you are not an IT person, in which case you can simply ignore this, or – if you are an IT person – go check out Azure Operational Insights!

Operations Manager 2012 SP1 BETA is out, and some cool things you might not (yet) know about it

It has been a couple of months since the CTP2 release (I blogged about that here http://www.muscetta.com/2012/06/16/operations-manager-2012-sp1-ctp2-is-out-and-my-teched-na-talk-mgt302/ ) and we have now reached the Beta milestone!

Although you might have already seen a number of posts about this last week (e.g. http://blogs.technet.com/b/server-cloud/archive/2012/09/10/system-center-2012-sp1-beta-available-evaluate-with-windows-server-2012.aspx or http://blogs.technet.com/b/momteam/archive/2012/09/11/system-center-2012-service-pack-1-beta-now-available-for-download.aspx), the information on the blogs so far didn’t quite explain all the new features that went into it, so I want to give a better summary specifically about the component I work on: Operations Manager.

Keep in mind that the below is just my personal summary – the official one is here http://technet.microsoft.com/en-us/library/jj656650.aspx and it does explain these things – but since much of the OpsMgr community reads a lot of blogs, I wanted to highlight some points of this release.

Platform Support

  • Support for installing the product on Windows Server 2012 for all components: agent, server, databases, etc.
  • Support for using SQL Server 2012 to host the databases

Cloud Services

  • Global Service Monitor – this is something the Beta enables, but the required MPs don’t currently ship with the Beta download directly – you will be able to sign up for the GSM Beta here. Once you have registered and imported the new MPs, you will be able to use our cloud-based capability to monitor the health of your web applications from a geo-distributed perspective that Microsoft manages and runs on Windows Azure, just like you would from your own agent/watcher nodes. Think of it as an extension of your network, or “watcher nodes in the cloud”.

APM-Related improvements

This is my area and what my team and I specifically work on – so I personally had the privilege of driving some of this work (not all of it – other PMs drove some of it too!).

  • Support for IIS8 with APM (.NET application performance monitoring) – this enables APM to monitor applications running on Windows Server 2012, not just 2008. The new Windows Server 2012 and IIS8 Management Packs are required for this to work. Please note that if you have imported the previous “Beta” Windows 8 Management Packs, they will need to be removed prior to installing the official Windows Server 2012 Management Packs. For more about Windows Server 2012 support and MPs, read here http://blogs.technet.com/b/momteam/archive/2012/09/05/windows-server-2012-system-center-operations-manager-support.aspx
  • Monitoring of WCF, ASP.NET MVC and .NET Windows Services – we made changes to the agent so that we better understand and present data related to calls to WCF services, we support monitoring of ASP.NET MVC applications, and we enabled monitoring of Windows Services built on the .NET Framework. The APM documentation here has been updated with regard to these changes and covers both 2012 RTM and SP1 (pointing out the differences where needed) http://technet.microsoft.com/en-us/library/hh457578.aspx
  • Introduction of Azure SDK support – this means you can monitor applications that make use of Azure Storage with APM; the agent is now aware of Azure table, blob and queue calls as well as SQL Azure calls. It essentially means that APM events will tell you things like “your app was slow when copying that Azure blob” or “you got an access denied when writing to that table”.
  • 360 .NET Application Monitoring Dashboards – this brings together different perspectives of application health in one place: it displays information from Global Service Monitor, .NET Application Performance Monitoring, and Web Application Availability Monitoring to provide a summary of health and key metrics for 3-tier applications in a single view. Documentation here http://technet.microsoft.com/en-us/library/jj614613.aspx
  • Monitoring of SharePoint 2010 with APM (.NET application performance monitoring) – this was a very common ask from customers and the field, and some folks were trying to come up with manual configurations to enable it (e.g. http://blogs.technet.com/b/shawngibbs/archive/2012/03/01/system-center-2012-operation-manager-apm.aspx ), but now this comes out of the box and it is, in fact, better than what you could configure yourself: we had to change some of the agent code, not just configuration, to deal with some intricacies of SharePoint…
  • Integration with Team Foundation Server 2010 and Team Foundation Server 2012 – functionality has also been enhanced compared to the previous TFS Synchronization management pack (which shipped out of band; it is now part of Operations Manager). It allows operations teams to forward APM alerts ( http://blogs.technet.com/b/momteam/archive/2012/01/23/custom-apm-rules-for-granular-alerting.aspx ) to developers in the form of TFS work items, for things that operations teams might not be able to address themselves (e.g. exceptions or performance events that require fixes/code changes)
  • Conversion of Application Performance Monitoring events to IntelliTrace format – this enables developers to get information about exceptions from their applications in a format that can be natively used in Visual Studio. Documentation for this feature is not yet available and will likely appear as we approach the final release of Service Pack 1. This is another great integration point between operations and development teams and tools, contributing to our DevOps story (my personal take on which was the subject of an earlier post of mine: http://www.muscetta.com/2012/02/05/apm-in-opsmgr-2012-for-dev-and-for-ops/)

Unix/Linux Improvements

Audit Collection Services

  • Support for Dynamic Access Control in Windows Server 2012 – when was the last time an update to ACS was made? It seems like a long time ago to me… Windows Server 2012 enhances the existing Windows ACL model to support Dynamic Access Control, and System Center 2012 Service Pack 1 (SP1) contributes to fulfilling these scenarios by providing enterprise-wide visibility into the use of Dynamic Access Control.

Network Monitoring

  • Additional network device models supported – new models have been tested and added to the supported list
  • Visibility into virtual network switches in the vicinity dashboard – this requires integration with Virtual Machine Manager to discover the network switches exposed by the hypervisor

 

 

Reminders:

  • Production use is NOT supported for customers who are not part of the TAP program
  • Upgrade from CTP2 to Beta is NOT Supported
  • Upgrade from 2012 RTM to SP1 Beta will ONLY be supported for customers participating in the TAP Program
  • Procedures not covered in the documentation might not work

 

 

 

Download http://www.microsoft.com/en-us/download/details.aspx?id=34607

Operations Manager 2012 SP1 CTP2 is out, and my TechED NA talk (MGT302)

As you might have already heard, this has been an amazing week at TechEd North America: System Center 2012 has been voted as the Best Microsoft Product at TechEd, and we have released the Community Technology Preview (CTP2) of all System Center 2012 SP1 components.

I wrote a (quick) list of the changes in Operations Manager CTP2 in this other blog post, and many of those are related to APM (formerly AVIcode technology). I also demoed some of these changes in my session on Thursday – you can watch the recording here. I think one of the most-awaited changes is support for monitoring Windows Services written in .NET – but there is more than that!

In the talk I also covered a bit of Java monitoring (which is the same as in 2012 – no changes in SP1), and my colleague Åke Pettersson talked about Synthetic Transactions and how to bring it all together (synthetics and APM) in a single new dashboard (also shipping in SP1 CTP2) that gives you a 360-degree view of your applications. The CTP2 documentation covers both the changes to APM and how to light up this new dashboard.

When it comes to synthetics – I know you have been using them from your own agents/watcher nodes – but to get a complete picture from the outside in (or “last mile”), we have now also announced the Beta of Global Service Monitor (it was even featured in the keynote!). Essentially, we extend your OpsMgr infrastructure to the cloud: you upload your tests to our Azure-based service, we run those tests against your Internet-facing applications from our watcher nodes in various datacenters around the globe, and we feed the data back to your OpsMgr infrastructure, so that you can see whether your application is available and how it is responding from those locations. You can sign up for the consumer preview of GSM from the Connect site.

Enjoy your beta testing! (Isn’t that what weekends are for, geeks?)

Operations Manager 2012 Release Candidate is out of the bag!

Go read the announcement at http://blogs.technet.com/b/server-cloud/archive/2011/11/10/system-center-operations-manager-2012-release-candidate-from-the-datacenter-to-the-cloud.aspx

This is the first public release since I joined the team (I started in this role the day after the team shipped the Beta), and the first release that contains some direct output of my work. It feels so good!

Documentation has also been refreshed – it starts here http://technet.microsoft.com/en-us/library/hh205987.aspx

The part specifically about the APM feature is here http://technet.microsoft.com/en-us/library/hh457578.aspx

Enjoy!

Repost: Useful SetSPN tips

I just saw that my former colleague (PFE) Tristan has posted an interesting note about the use of SetSPN "-A" vs SetSPN "-S". I normally don’t repost other people’s content, but I thought this would be useful, as there are a few SPNs used in OpsMgr and it is not always easy to get them all right… and you can find a few tricks I was not aware of by reading his post.

Check out the original post at http://blogs.technet.com/b/tristank/archive/2011/10/10/psa-you-really-need-to-update-your-kerberos-setup-documentation.aspx
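
To give a flavor of the difference, here is a minimal sketch – the server and account names below are made up, and MSOMSdkSvc is (if I remember correctly) the SPN prefix used by the OpsMgr SDK/Data Access service:

# -A adds the SPN without checking whether another account already has it registered (duplicate SPNs break Kerberos)
setspn -A MSOMSdkSvc/OMMS01.domain.dom DOMAIN\om_sdk_svc

# -S searches for duplicates first and refuses to create one – generally the safer form
setspn -S MSOMSdkSvc/OMMS01.domain.dom DOMAIN\om_sdk_svc

# -L lists the SPNs currently registered for an account
setspn -L DOMAIN\om_sdk_svc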

I have been chosen; Farewell my friends…

I have been in Premier Field Engineering for nearly 7 years (it was not even called PFE when I joined – it was just "another type of support"…) and I have to admit that it has been a fun, fun ride: I worked with awesome people and managed to make a difference with our products and services for many customers – directly working with some of those customers, as well as indirectly through the OpsMgr Health Check program – the service I led for the last 3+ years, which nowadays gets delivered hundreds of times a year around the globe by my fellow PFEs.

But it is time to move on: I have decided to go through a big life change for me and my family, and I won't be working as a Premier Field Engineer anymore as of next week.

But don't panic – I am staying at Microsoft!

I have actually never been closer to Microsoft than now: we are packing and moving to Seattle the coming weekend, and on July 18th I will start working as a Program Manager in the Operations Manager product team, in Redmond. I am hoping this will enable me to make a difference with even more customers.

Exciting times ahead – wish me luck!

 

That said – PFE is hiring! If you are interested in working for Microsoft – we have open positions (including my vacant position in Italy) for almost all the Microsoft technologies. Simply visit http://careers.microsoft.com and search on “PFE”.

As for the OpsMgr Health Check, don't you worry: it will continue being improved – I left it in the hands of some capable colleagues: Bruno Gabrielli, Stefan Stranger and Tim McFadden – and they have a plan and commitment to update it to OpsMgr 2012.

OpsMgr Agents and Gateways Failover Queries

The following article by Jimmy Harper explains very well how to set up agent and gateway failover paths through PowerShell http://blogs.technet.com/b/jimmyharper/archive/2010/07/23/powershell-commands-to-configure-gateway-server-agent-failover.aspx . This is the approach I recommend, and that article is great – I encourage you to check it out if you haven’t done so yet!

Anyhow, when checking for the actual failover paths that have been configured, the PowerShell approach suggested by Jimmy is rather slow – especially if your agent count is high. In the Operations Manager Health Check tool I used that technique at the beginning too, but eventually moved to SQL queries purely for performance reasons. We have been using these SQL queries quite successfully for about three years now.

But this is the season of giving… and I guess SQL queries can be a gift, right? Therefore I am now donating them as a Christmas gift to the OpsMgr community :-)

Enjoy – and Merry Christmas!

 

--GetAgentForWhichServerIsPrimary
SELECT SourceBME.DisplayName as Agent,TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK) 
JOIN BaseManagedEntity SourceBME 
ON R.SourceEntityID = SourceBME.BaseManagedEntityID 
JOIN BaseManagedEntity TargetBME 
ON R.TargetEntityID = TargetBME.BaseManagedEntityID 
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceCommunication() 
AND SourceBME.DisplayName not in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.GatewayManagementServer') 
and IsDeleted ='0') 
AND SourceBME.DisplayName not in (select DisplayName from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.ManagementServer') 
and IsDeleted ='0') 
AND R.IsDeleted = '0'


--GetAgentForWhichServerIsFailover
SELECT SourceBME.DisplayName as Agent,TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK) 
JOIN BaseManagedEntity SourceBME 
ON R.SourceEntityID = SourceBME.BaseManagedEntityID 
JOIN BaseManagedEntity TargetBME 
ON R.TargetEntityID = TargetBME.BaseManagedEntityID 
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceSecondaryCommunication() 
AND SourceBME.DisplayName not in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.GatewayManagementServer') 
and IsDeleted ='0') 
AND SourceBME.DisplayName not in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.ManagementServer') 
and IsDeleted ='0') 
AND R.IsDeleted = '0'


--GetGatewayForWhichServerIsPrimary
SELECT SourceBME.DisplayName as Gateway, TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK) 
JOIN BaseManagedEntity SourceBME 
ON R.SourceEntityID = SourceBME.BaseManagedEntityID 
JOIN BaseManagedEntity TargetBME 
ON R.TargetEntityID = TargetBME.BaseManagedEntityID 
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceCommunication() 
AND SourceBME.DisplayName in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.GatewayManagementServer') 
and IsDeleted ='0') 
AND R.IsDeleted = '0'
    

--GetGatewayForWhichServerIsFailover
SELECT SourceBME.DisplayName As Gateway, TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK) 
JOIN BaseManagedEntity SourceBME 
ON R.SourceEntityID = SourceBME.BaseManagedEntityID 
JOIN BaseManagedEntity TargetBME 
ON R.TargetEntityID = TargetBME.BaseManagedEntityID 
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceSecondaryCommunication() 
AND SourceBME.DisplayName in (select DisplayName 
from dbo.ManagedEntityGenericView WITH (NOLOCK) 
where MonitoringClassId in (select ManagedTypeId 
from dbo.ManagedType WITH (NOLOCK) 
where TypeName = 'Microsoft.SystemCenter.GatewayManagementServer') 
and IsDeleted ='0') 
AND R.IsDeleted = '0'


--xplat agents
select bme2.DisplayName as XPlatAgent, bme.DisplayName as Server
from dbo.Relationship r with (nolock) 
join dbo.RelationshipType rt with (nolock) 
on r.RelationshipTypeId = rt.RelationshipTypeId 
join dbo.BasemanagedEntity bme with (nolock) 
on bme.basemanagedentityid = r.SourceEntityId 
join dbo.BasemanagedEntity bme2 with (nolock) 
on r.TargetEntityId = bme2.BaseManagedEntityId 
where rt.RelationshipTypeName = 'Microsoft.SystemCenter.HealthServiceManagesEntity' 
and bme.IsDeleted = 0 
and r.IsDeleted = 0 
and bme2.basemanagedtypeid in (SELECT DerivedTypeId 
FROM DerivedManagedTypes with (nolock) 
WHERE BaseTypeId = (select managedtypeid 
from managedtype where typename = 'Microsoft.Unix.Computer') 
and DerivedIsAbstract = 0)
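
By the way, if you would rather run these from PowerShell than paste them into SQL Management Studio every time, a small sketch like the one below works against the OperationsManager database (the SQL server/instance name is a placeholder – adjust it to your environment; the query here is a trimmed-down version of the first one above, just to keep the example short):

# run a query against the OperationsManager database and print the results
$connectionString = "Server=SQLSERVER\INSTANCE;Database=OperationsManager;Integrated Security=SSPI"
$query = @"
SELECT SourceBME.DisplayName as Agent, TargetBME.DisplayName as Server
FROM Relationship R WITH (NOLOCK)
JOIN BaseManagedEntity SourceBME ON R.SourceEntityID = SourceBME.BaseManagedEntityID
JOIN BaseManagedEntity TargetBME ON R.TargetEntityID = TargetBME.BaseManagedEntityID
WHERE R.RelationshipTypeId = dbo.fn_ManagedTypeId_MicrosoftSystemCenterHealthServiceCommunication()
AND R.IsDeleted = '0'
"@
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter($query, $connection)
$table = New-Object System.Data.DataTable
[void]$adapter.Fill($table)
$table | Format-Table -AutoSize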

Got Orphaned OpsMgr Objects?

Have you ever wondered what would happen if, in Operations Manager, you deleted a Management Server or Gateway that manages objects (such as network devices) or has agents pointing uniquely to it as their primary server?

The answer is simple, but not very pleasant: you get ORPHANED objects, which will linger in the database but which you won’t be able to “see” or re-assign from the GUI anymore.

So the first thing I want to share is a query to determine IF you have any of those orphaned agents. Even if you already know you do, since you are not able to "see" them from the console, you might still have to dig their names out of the database. Here's a query I got from a colleague in our reactive support team:


-- Check for orphaned health services (e.g. agent).
declare @DiscoverySourceId uniqueidentifier;
SET @DiscoverySourceId = dbo.fn_DiscoverySourceId_User();
SELECT TME.[TypedManagedEntityid], HS.PrincipalName
FROM MTV_HealthService HS
INNER JOIN dbo.[BaseManagedEntity] BHS WITH(nolock)
ON BHS.[BaseManagedEntityId] = HS.[BaseManagedEntityId]
-- get host managed computer instances
INNER JOIN dbo.[TypedManagedEntity] TME WITH(nolock)
ON TME.[BaseManagedEntityId] = BHS.[TopLevelHostEntityId]
AND TME.[IsDeleted] = 0
INNER JOIN dbo.[DerivedManagedTypes] DMT WITH(nolock)
ON DMT.[DerivedTypeId] = TME.[ManagedTypeId]
INNER JOIN dbo.[ManagedType] BT WITH(nolock)
ON DMT.[BaseTypeId] = BT.[ManagedTypeId]
AND BT.[TypeName] = N'Microsoft.Windows.Computer'
-- only with missing primary
LEFT OUTER JOIN dbo.Relationship HSC WITH(nolock)
ON HSC.[SourceEntityId] = HS.[BaseManagedEntityId]
AND HSC.[RelationshipTypeId] = dbo.fn_RelationshipTypeId_HealthServiceCommunication()
AND HSC.[IsDeleted] = 0
INNER JOIN DiscoverySourceToTypedManagedEntity DSTME WITH(nolock)
ON DSTME.[TypedManagedEntityId] = TME.[TypedManagedEntityId]
AND DSTME.[DiscoverySourceId] = @DiscoverySourceId
WHERE HS.[IsAgent] = 1
AND HSC.[RelationshipId] IS NULL;

Once you have identified the agent you need to re-assign to a new management server, this is doable from the SDK. Below is a PowerShell script I wrote which will re-assign it to the RMS. It has to run from within the OpsMgr Command Shell.
You still need to change the logic that chooses which agent to re-assign – this is meant as a starting point… you could easily expand it to accept parameters and/or consume an input text file, or to use a different management server than the RMS… you get the point.

# connect to the management group (run this from within the OpsMgr Command Shell)
$mg = (get-managementgroupconnection).managementgroup
# the relationship type that ties an agent (health service) to the management server it reports to
$mrc = Get-RelationshipClass | where {$_.name -like "*Microsoft.SystemCenter.HealthServiceCommunication*"}
$cmro = new-object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
# the RMS health service will be the new target
$rms = (get-rootmanagementserver).HostedHealthService
$deviceclass = $mg.getmonitoringclass("HealthService")
$mc = Get-connector | where {$_.Name -like "*MOM Internal Connector*"}
Foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    # the next line should be changed to pick the right agent(s) to re-assign
    if ($obj.DisplayName -match 'dsxlab')
    {
        Write-host $obj.displayname
        $imdd = new-object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
        $cmro.SetSource($obj)
        $cmro.SetTarget($rms)
        $imdd.Add($cmro)
        $imdd.Commit($mc)
    }
}
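
And if you want to re-assign to a specific management server rather than to the RMS (one of the variations mentioned above), something along these lines should work – an untested sketch, where the server name is just a placeholder and I am assuming the objects returned by Get-ManagementServer expose the same HostedHealthService property the RMS object does:

$ms = Get-ManagementServer | where {$_.Name -eq "OMMS02.domain.dom"}
$target = $ms.HostedHealthService
# ...then call $cmro.SetTarget($target) instead of $cmro.SetTarget($rms) in the loop above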

Similarly, you might get orphaned network devices. The script below re-assigns all network devices to the RMS. This script is actually something I have had since before the other one (yes, it has been sitting in my "digital drawer" for a couple of years or more…) and uses the same concept – only you might notice that the relationship's source and target are "reversed", since the relationship types are different:

  • the Management Server (source) "manages" the Network Device (target)
  • the Agent (source) "talks" to the Management Server (target)

With a bit of added logic it should be easy to have it work for specific devices.

$mg = (get-managementgroupconnection).managementgroup
# this time the relationship type is "health service should manage entity"
$mrc = Get-RelationshipClass | where {$_.name -like "*Microsoft.SystemCenter.HealthServiceShouldManageEntity*"}
$cmro = new-object Microsoft.EnterpriseManagement.Monitoring.CustomMonitoringRelationshipObject($mrc)
$rms = (get-rootmanagementserver).HostedHealthService
$deviceclass = $mg.getmonitoringclass("NetworkDevice")
Foreach ($obj in $mg.GetMonitoringObjects($deviceclass))
{
    Write-host $obj.displayname
    $imdd = new-object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalMonitoringDiscoveryData
    # note the relationship is reversed: the management server is the SOURCE and the device is the TARGET
    $cmro.SetSource($rms)
    $cmro.SetTarget($obj)
    $imdd.Add($cmro)
    $mc = Get-connector | where {$_.Name -like "*MOM Internal Connector*"}
    $imdd.Commit($mc)
}

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

Microsoft.Linux.RHEL.5.LogicalDisk.DiskBytesPerSecond Type Mismatch

I have had the following in my notes for a while… and I have not blogged in a while (been too busy), so I decided to blog it today, before the topic gets too old and starts stinking :-)

 

It all started when a customer showed me an alert he was seeing in his environment from some XPlat workflow. The alert looked like the following:

Generic Performance Mapper Module Failed Execution
Alert Description Source: RLWSCOM02.domain.dom
Module was unable to convert parameter to a double value
Original parameter: '$Data///*[local-name()="BytesPerSecond"]$'
Parameter after $Data replacement: "
Error: 0x80020005
Details: Type mismatch.
One or more workflows were affected by this.
Workflow name: Microsoft.Linux.RHEL.5.LogicalDisk.DiskBytesPerSecond.Collection
Instance name: /
Instance ID: {4F6FA8F5-C56F-4C9B-ED36-12DAFF4073D1}
Management group: DataCenter
Path: RLWSCOM02.domain.dom\RLWSCOM02.domain.dom
Alert Rule: Generic Performance Mapper Module Runtime Failure
Created: 6/28/2010 11:30:28 PM

 

First I stumbled into this forum post which mentions the same symptom http://social.technet.microsoft.com/Forums/en-US/crossplatformgeneral/thread/62e0bf3e-be6f-4218-a37b-f1e66f02aa49 – but when looking at the resolution, the locale on the customer's machine was good (i.e. set to US settings), so I concluded that it was not the same root cause.

 

Then I looked at what that rule was supposed to do, and queried the same CIM class both remotely through WS-Man and locally via CIM, and concluded that my issue was that certain values were being returned as NULL while the Management Server expected a number – hence the Type Mismatch!

I have explained previously how to run CIM queries against the XPlat agent; in this case it was the following one:

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_FileSystemStatisticalInformation?__cimnamespace=root/scx -username:scomuser -password:password -r:https://rllspago01.domain.dom:1270/wsman -auth:basic -skipCACheck -skipCNCheck

 

SCX_FileSystemStatisticalInformation

AverageDiskQueueLength = null

AverageTransferTime = null

BytesPerSecond = null

Caption = File system information

Description = Performance statistics related to a logical unit of secondary storage

ElementName = null

FreeMegabytes = 4007

IsAggregate = false

IsOnline = true

Name = /

PercentBusyTime = null

PercentFreeSpace = 55

PercentIdleTime = null

PercentUsedSpace = 45

ReadBytesPerSecond = null

ReadsPerSecond = null

TransfersPerSecond = null

UsedMegabytes = 3278

WriteBytesPerSecond = null

WritesPerSecond = null

 

See the NULLs? Those are our issue.

Now, before you continue reading, I will tell you that I have also investigated this internally, and apparently we have just (in Cumulative Update 3) changed this behaviour in our XPlat modules so that when NULL is returned, we consider it to be ZERO. Good or bad as that may be, it will at least take care of the error. But if you don’t get any data from the Unix system… well, you are not getting any data – so that might cause a surprise later on when you go and look at those charts expecting to see your disk “performance counters” and all you find is a bunch of ZEROs (how very interesting!). So, basically, the fix in CU3 suppresses the symptom, but does not address the cause.

So, let’s see what is actually causing this, as you probably do want to get those statistics – otherwise you would not be monitoring that server!

I looked at the Cimd.log (set to verbose), which only says the following (basically not much: it is getting info for three partitions… and the provider code is working):

2010-09-01T08:38:32,796Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances()

2010-09-01T08:38:33,359Z Trace      [scx.core.providers.diskprovider:5964:3086830480] Object Path = //rllspago01.domain.dom/root/scx:SCX_FileSystemStatisticalInformation

2010-09-01T08:38:33,359Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances() – Calling DoEnumInstances()

2010-09-01T08:38:33,359Z Trace      [scx.core.providers.diskprovider:5964:3086830480] DiskProvider DoEnumInstances

2010-09-01T08:38:33,359Z Trace      [scx.core.providers.diskprovider:5964:3086830480] DiskProvider GetDiskEnumeration – type 3

2010-09-01T08:38:33,360Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances() – DoEnumInstances() returned – 3

2010-09-01T08:38:33,360Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances() – Call ReturnDone

2010-09-01T08:38:33,360Z Trace      [scx.core.providers.diskprovider:5964:3086830480] BaseProvider::EnumInstances() – return OK

2010-09-01T08:38:33,360Z Trace      [scx.core.provsup.cmpibase.singleprovider.DiskProvider:5964:3086830480] SingleProvider::EnumInstances() – Returning – 0

 

But it still did not give me an idea as to why we would not get data for those “counters”. At this point I stopped using complex troubleshooting techniques, simply turned intuition on, and tried with some help from a search engine: http://www.bing.com/search?q=How+do+I+find+out+Linux+Disk+utilization

The results I got all mentioned that on Linux you would use the “iostat” command.

So I tried to use it and… lo and behold: the iostat command was NOT INSTALLED on that machine!

Guess what? We installed it (it is included in the “sysstat” package for Red Hat Linux, so a simple “yum install sysstat” took care of this) and the counters started working!

Hope that is useful to some.

OpsMgr Event IDs Spreadsheet

I work in support (mostly with System Center Operations Manager, as you know), and I work with event logs every day. The following are typical situations:

  1. I get a colleague or a customer telling me “I am having a problem and the SCOM agent is showing 21037 events and 20002 events.  What’s wrong with it?”   
  2. I want to tune an OpsMgr environment and reduce load on the database by turning off a few event collections, as my friend Kevin Holman suggests here http://blogs.technet.com/kevinholman/archive/2009/11/25/tuning-tip-turning-off-some-over-collection-of-events.aspx .
  3. I am analyzing, sorting and grouping events with PowerShell, as I have written about on my blog lately http://www.muscetta.com/2009/12/16/opsmgr-eventlog-analysis-with-powershell/ , but I can’t read those long descriptions properly (see the quick sketch right after this list).
  4. I exported an EVT from a customer environment and loaded it on a machine that does not have the OpsMgr message DLLs installed – all I see are event IDs and types (Warning, Error) but no real descriptions – and I still want to figure out what those events are trying to tell me.
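
As for the PowerShell grouping mentioned in point 3, this is roughly what it looks like – a minimal sketch, assuming you run it on a box whose agent writes to the “Operations Manager” event log:

# count the most frequent event IDs in the local Operations Manager log
Get-EventLog -LogName "Operations Manager" -Newest 5000 |
    Group-Object EventID |
    Sort-Object Count -Descending |
    Select-Object Count, Name -First 20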

Getting to the point: I – like everyone else – don’t have every OpsMgr event memorized.

This is why I thought of building this spreadsheet, and I hope it might come in handy to more people.

The spreadsheet contains an “AllEvents” list – and then the same events are broken down by event source as well:

[screenshot: the spreadsheet’s tabs – “AllEvents” plus one tab per event source]

When you want to search for an event (in one of the situations described above), just open up the spreadsheet, go to the “AllEvents” tab, hit CTRL+F (“Find”) and type in the event ID you are searching for:

[screenshot: using Find in the “AllEvents” tab to locate an event ID]

And this will take you to the row containing the event, so you can look up its description:

[screenshot: the spreadsheet row containing the event and its description]

The description shows the event’s standard text (which lives in the message DLL, and is therefore the part you will not see when opening an EVT on another machine that does not have OpsMgr installed) and where the event parameters go (%1, %2, etc. – which are the strings you will see in the EVT anyway).

That way you can get an understanding of what the original message would have looked like on the original machine.

This is just one possible usage pattern for this reference. It can also be useful to just read/study the events, learning about new ones you have never encountered or refreshing your memory on those you HAVE seen in the past. And of course you can also find other creative ways to use it.

You can get it from here.

 

A few last words to give due credit: this spreadsheet was compiled by using Eventlog Explorer (http://blogs.technet.com/momteam/archive/2008/04/02/eventlog-explorer.aspx ) to extract the event information out of the message DLLs on an OpsMgr 2007 R2 installation. That info was then copied and pasted into Excel to create an “offline” reference. I would also like to thank Kevin Holman for pointing me to Eventlog Explorer in the first place, and then for insisting that I should not keep this spreadsheet in my drawer, as it could be useful to more people!