Archive for the 'MOM' Category


Capturing your knowledge/intelligence should be SIMPLE

Saturday, November 22nd, 2014

Lately this blog has been very personal. This post is about stuff I do at work, so if you are not one of my IT readers, don't worry.

For my IT readers, an interruption from guitars and music on this blog to share some personal reflection on OpInsights and SCOM.

SCOM is very powerful. You know I have always been a huge fan of 2007 and worked myself on the 2012 release. But compared to its predecessor – MOM – authoring management packs in SCOM has always been very hard: multiple tools, a lot of documentation… here we are, more than 6 years later, and the first 2 comments on an old post on the momteam blog still strike me hard every time I read them:

whatever happened to click,click,done?

You would think that things have changed, but SCOM is fundamentally complex, and even with the advances in tooling (VSAE, MPAuthor, etc) writing MPs is still black magic, if you ask some users.

I already blogged about exporting an MP and converting its event-based alerting rules to OpInsights searches.

Well, writing those alerting rules in SCOM needs a lot of complex XML – you might not need to know how to write it (but you often have to attempt deciphering it), and even if you create rules with a wizard, the wizard will produce a lot of complex XML for you.

The screenshot below shows the large XML chunk that is needed to pick up a specific event ID from a specific log and a specific source: the key/important information is only a small fraction of it, while the rest is 'packaging':

[screenshot: the XML of an event collection rule]

I want OpInsights to be SIMPLE.

If there is one thing I want the most for this project, it is this.

That's why the same rule can now be expressed with a simple filter search in OpInsights, where all you need is just that key information

EventID=1037 Source="Microsoft-Windows-IIS-W3SVC" EventLog=System

and you essentially don't have to care about any sort of packaging nor mess with XML.

Click, click – filters/facets in the UI let you refine your criteria. And your saved searches too. And they execute right away; there is not even a 'Done' button to press. You might just be watching those searches pinned to tiles on your dashboard. All it took was identifying the three key pieces of info, no complex XML wrapping needed!

Ok, granted – there ARE legitimate, more complex scenarios for which you need complex data sources/collectors and specialized/well-thought-out data shaping, not just events – and we use those powerful capabilities of the MMA agent in intelligence packs. But at its core, the simple search language and explorability of the data are meant to bring back SIMPLE to the modern monitoring world. Help us prioritize which data sources you need first!

PS – if you have no idea what I was talking about: thanks for making it this far, but don't worry. Either you are not an IT person, in which case you can simply ignore this; or, if you are an IT person, go check out Azure Operational Insights!

System Center Advisor has kept me busy and you should check it out

Sunday, August 17th, 2014

If you are one of my work/Microsoft-related subscribers or other IT geeks, you might have been disappointed that this blog has only had my own songs posted lately. Yes, I know you don't like them. It's fine.

In general, I have lately tended to blog work-related stuff at my other MSDN blog or on the MOMteam blog. Also, several folks (inside Microsoft and outside) have reached out, and keep reaching out, to me with APM-related questions. Sorry, I no longer work on nor own that feature. In fact I have not really worked on it for over a year. It appears ITPros and Devs are still a thing over here.

So I stayed with the ITPros, and in the last 16 or so months I have been busy with System Center Advisor: first small but useful things, then the complete overhaul we did this past May at TechEd North America 2014.

If you have not yet heard about it and have no clue what I am talking about, then you should definitely check it out. See the following resources if you want to learn more of what I am working on:

VIDEOS

Advisor Preview 2min Overview Video: http://aka.ms/unrpst

Advisor Preview TechEd announcement Video: http://aka.ms/Aulpqc

Joseph @ The Edge Show showing off our Log Management capabilities http://aka.ms/R4p9d0

Advisor Preview Onboarding Steps Video: http://aka.ms/Lgt2zu 

SOCIAL

Advisor Preview Twitter Handle: @mscAdvisor

RESOURCES

Advisor Preview Onboarding Documentation: http://aka.ms/Wrbzug

Advisor Preview Troubleshooting blog: http://aka.ms/G04tcq

Advisor Preview feature requests can be made inside the Advisor portal by clicking the 'Feedback' link.

Operations Manager 2012 Release Candidate is out of the bag!

Thursday, November 10th, 2011

Go read the announcement at http://blogs.technet.com/b/server-cloud/archive/2011/11/10/system-center-operations-manager-2012-release-candidate-from-the-datacenter-to-the-cloud.aspx

This is the first public release since I joined the team (I started in this role the day after the team had shipped the Beta), and the first release that contains some direct output of my work. It feels so good!

Documentation has also been refreshed – it starts here http://technet.microsoft.com/en-us/library/hh205987.aspx

The part specifically about the APM feature is here http://technet.microsoft.com/en-us/library/hh457578.aspx

Enjoy!

Does anyone have a new System Center sticker for me?

Saturday, November 27th, 2010

Does anyone have a new System Center sticker?

I got this sticker last APRIL at MMS2010 in JUST ONE COPY, and I waited till I got a NEW laptop in SEPTEMBER to actually use it…
It also took a while to stick it on properly (not to mention re-installing the PC the way I wanted…), but this week they told me that, by mistake, I had been given the wrong machine (they did it all themselves, though – I did not ask for any specific one) and this one needs to be replaced!!!!

This is WORSE than any hardware FAILure, as the machine just works very well and I was expecting to keep it for the next two years :-(

Can anyone be so nice to send me one of those awesome stickers again? :-)

A few thoughts on sizing Audit Collection System

Thursday, March 18th, 2010

People were already collecting logs with MOM, so why not the security log? Some people were doing that, but it did not scale enough; for this reason, a few years ago Eric Fitzgerald announced that he was working on Microsoft Audit Collection System. The tool as it was had no interface… and the rest is history: it has been integrated into System Center Operations Manager. Even so, ACS remains a lesser-known component of OpsMgr.

There are a number of resources on the web that are worth mentioning and linking to – and, of course, many more than I can link here.

As for myself, I have been playing with ACS since those early beta days (before I joined Microsoft and before going back to MOM, when I was working in Security), but I never really blogged about this piece.

Since I have lately been doing quite a lot of work around ACS again, I thought it might be worth consolidating some thoughts about it – hence this post.

Anatomy of an “Online” Sizing Calculation

What I would like to explain here is the process I go thru when analyzing the data stored in an ACS database, in order to determine a filtering strategy: what to keep and what not to keep, by applying a filter on the ACS Collector.

So, the first thing I usually start with is one of the many "ACS sizer" Excel spreadsheets around… which usually tell you that you need more space than is really necessary, basically giving you a "worst case" scenario. I don't know how some people can actually do this from a purely theoretical point of view; I usually prefer a bottom-up approach: I look at the actual data that ACS is collecting without filters, and start from there for a better/more accurate sizing.

In the case of a new install this is easy – you just turn ACS on, set the retention to a few days (one or two weeks maximum), give the DB plenty of space to make sure it will make it, add all your forwarders… sit back and wait.

Then you come back 2 weeks later and start looking at the data that has been collected.

What/How much data are we collecting?

First of all, if we have not changed the default settings, the grooming and partitioning algorithm will create new partitioned tables every day. So my first step is to see how big each “partition” is.

But… what is a partition, anyway? A partition is a set of 4 tables joined together:

  1. dtEvent_GUID
  2. dtEventData_GUID
  3. dtPrincipal_GUID
  4. dtStrings_GUID

where GUID is a new GUID every day, and of course the 4 tables that make up a daily partition will have the same GUID.

The dtPartition table contains a list of all partitions and their GUIDs, together with their start and closing time.

Just to get a rough estimate we can ignore the space used by the last three tables – which are usually very small – and only use the dtEvent_GUID table to get the number of events for that day, and use the stored procedure “sp_spaceused”  against that same table to get an overall idea of how much space that day is taking in the database.
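If you prefer not to run sp_spaceused by hand against each daily table, the same check can be scripted with a loop over the partition list. The following is only a minimal sketch of that idea: it assumes the dtPartition table exposes the partition GUID and start time in columns called PartitionId and PartitionStartTime (names guessed from the report layout shown below), so verify the actual schema of your ACS database before running anything.

--rough automation sketch: size each daily dtEvent_ table with sp_spaceused
--column names (PartitionId, PartitionStartTime) are assumptions; check your dtPartition schema first
declare @guid varchar(64)
declare @table sysname
declare partition_cursor cursor for
  --table names use underscores, so replace dashes in case the GUID is stored in its canonical form
  select replace(convert(varchar(64), PartitionId), '-', '_') from dtPartition order by PartitionStartTime desc
open partition_cursor
fetch next from partition_cursor into @guid
while @@fetch_status = 0
begin
  set @table = 'dtEvent_' + @guid
  exec sp_spaceused @table  --returns rows and reserved/used space for that day's partition
  fetch next from partition_cursor into @guid
end
close partition_cursor
deallocate partition_cursor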

By following this process, I come up with something like the following:

Partition ID Status Partition Start Time Partition Close Time Rows Reserved  KB Total GB
9b45a567_c848_4a32_9c35_39b402ea0ee2 0 2/1/2010 2:00 2/1/2010 2:00 29,749,366 7,663,488 7,484
8d8c8ee1_4c5c_4dea_b6df_82233c52e346 2 1/31/2010 2:00 2/1/2010 2:00 28,067,438 9,076,904 8,864
34ce995b_689b_46ae_b9d3_c644cfb66e01 2 1/30/2010 2:00 1/31/2010 2:00 30,485,110 9,857,896 9,627
bb7ea5d3_f751_473a_a835_1d1d42683039 2 1/29/2010 2:00 1/30/2010 2:00 48,464,952 15,670,792 15,304
ee262692_beae_4d81_8079_470a54567946 2 1/28/2010 2:00 1/29/2010 2:00 48,980,178 15,836,416 15,465
7984b5b8_ddea_4e9c_9e51_0ee7a413b4c9 2 1/27/2010 2:00 1/28/2010 2:00 51,295,777 16,585,408 16,197
d93b9f0e_2ec3_4f61_b5e0_b600bbe173d2 2 1/26/2010 2:00 1/27/2010 2:00 53,385,239 17,262,232 16,858
8ce1b69a_7839_4a05_8785_29fd6bfeda5f 2 1/25/2010 2:00 1/26/2010 2:00 55,997,546 18,105,840 17,681
19aeb336_252d_4099_9a55_81895bfe5860 2 1/24/2010 2:00 1/24/2010 2:00 28,525,304 7,345,120 7,173
1cf70e01_3465_44dc_9d5c_4f3700dc408a 2 1/23/2010 2:00 1/23/2010 2:00 26,046,092 6,673,472 6,517
f5ec207f_158c_47a8_b15f_8aab177a6305 2 1/22/2010 2:00 1/22/2010 2:00 47,818,322 12,302,208 12,014
b48dabe6_a483_4c60_bb4d_93b7d3549b3e 2 1/21/2010 2:00 1/21/2010 2:00 55,060,150 14,155,392 13,824
efe66c10_0cf2_4327_adbf_bebb97551c93 2 1/20/2010 2:00 1/20/2010 2:00 58,322,217 15,029,216 14,677
0231463e_8d50_4a42_a834_baf55e6b4dcd 2 1/19/2010 2:00 1/19/2010 2:00 61,257,393 15,741,248 15,372
510acc08_dc59_482e_a353_bfae1f85e648 2 1/18/2010 2:00 1/18/2010 2:00 64,579,122 16,612,512 16,223

If you have just installed ACS and let it run without filters with your agents for a couple of weeks, you should get some numbers like those above for your “couple of weeks” of analysis. If you graph your numbers in Excel (both size and number of rows/events per day) you should get some similar lines that show a pattern or trend:

Trend: Space used by day

Trend: Number of events by day

So, in my example above, we can clearly observe a "weekly" pattern (Monday to Friday being busier than the weekend) and we can see that – for that environment – the biggest partition is roughly 17GB. If we round this up to 20GB – also considering that the weekends are much quieter – we can forecast 20*7 = 140GB per week. This includes an excess "buffer" which will let the system survive event storms, should they happen. We also always recommend having some free space to allow for re-indexing operations.

In fact, especially when collecting everything without filters, the daily size is a lot less predictable: imagine worms "trying out" administrator accounts' passwords, and so on… those things can easily create event storms.

Anyway, in the example above, the customer would have liked to keep 6 MONTHS (180 days) of data online, which would become 20*180 = 3600GB – THREE AND A HALF TERABYTES! Therefore we need a filtering strategy – and badly – to reduce this size.
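Just to spell out the arithmetic of that forecast, here it is as a trivial calculation; the only inputs are the rounded-up daily size and the desired retention, so adjust both to your own environment:

--back-of-the-envelope retention forecast, using the numbers from the example above
declare @dailyGB int, @retentionDays int
set @dailyGB = 20         --biggest daily partition, rounded up to allow for event storms
set @retentionDays = 180  --six months of data kept online
select @dailyGB * @retentionDays as EstimatedGB,
       cast((@dailyGB * @retentionDays) / 1024.0 as decimal(10,2)) as EstimatedTB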

[edited on May 7th 2010 – if you want to automate the above analysis and produce a table and graphs like those just shown, you should look at my following post.]

Filtering Strategies

Ok, then we need to look at WHAT actually makes up the volume of events we are collecting without filters. As I wrote above, I usually run queries to get this type of information.

I will not get into HOW TO write a filter here – a collector's filter is a WMI notification query, and how to configure it is already described pretty well elsewhere.

Here, instead, I want to walk thru the process and the queries I use to understand where the noise comes from and what could be filtered – and get an estimate of how much space we could be saving if we filter one way or another.

Number of Events per User

--event count by User (with Percentages)
declare @total float
select @total = count(HeaderUser) from AdtServer.dvHeader
select count(HeaderUser),HeaderUser, cast(convert(float,(count(HeaderUser)) / (convert(float,@total)) * 100) as decimal(10,2))
from AdtServer.dvHeader
group by HeaderUser
order by count(HeaderUser) desc

In our example above, over the 14 days we were observing, we obtained percentages like the following ones:

#evt HeaderUser Account Percent
204,904,332 SYSTEM 40.79 %
18,811,139 LOCAL SERVICE 3.74 %
14,883,946 ANONYMOUS LOGON 2.96 %
10,536,317 appintrauser 2.09 %
5,590,434 mossfarmusr

Just by looking at this, it is pretty clear that by filtering out events tracked by the accounts "SYSTEM", "LOCAL SERVICE" and "ANONYMOUS LOGON" we would save over 45% of the disk space!
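A quick way to sanity-check that estimate is to compute the combined weight of the accounts you are thinking of filtering in a single pass; this is just a variation of the per-user query above (add or remove accounts from the IN list as needed):

--combined percentage of events generated by a candidate list of "noisy" accounts
declare @total float
select @total = count(HeaderUser) from AdtServer.dvHeader
select count(HeaderUser) as NoisyEvents,
       cast(count(HeaderUser) / @total * 100 as decimal(10,2)) as PercentOfTotal
from AdtServer.dvHeader
where HeaderUser in ('SYSTEM','LOCAL SERVICE','ANONYMOUS LOGON')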

Number of Events by EventID

Similarly, we can look at how different Event IDs have different weights on the total amount of events tracked in the database:

--event count by ID (with Percentages)
declare @total float
select @total = count(EventId) from AdtServer.dvHeader
select count(EventId),EventId, cast(convert(float,(count(EventId)) / (convert(float,@total)) * 100) as decimal(10,2))
from AdtServer.dvHeader
group by EventId
order by count(EventId) desc

We would get some similar information here:

Event ID Meaning Sum of events Percent
538 A user logged off 99,494,648 27.63
540 Successful Network Logon 97,819,640 27.16
672 Authentication Ticket Request 52,281,129 14.52
680 Account Used for Logon by (Windows 2000) 35,141,235 9.76
576 Specified privileges were added to a user's access token. 26,154,761 7.26
8086 Custom Application ID 18,789,599 5.21
673 Service Ticket Request 10,641,090 2.95
675 Pre-Authentication Failed 7,890,823 2.19
552 Logon attempt using explicit credentials 4,143,741 1.15
539 Logon Failure – Account locked out 2,383,809 0.66
528 Successful Logon 1,764,697 0.49

Also, do not forget that ACS provides some reports to do this type of analysis out of the box, even if in my experience they are generally slower – on large datasets – than the queries provided here. Also, a number of reports have been buggy over time, so I just prefer to run queries and be on the safe side.

Below is an example of such a report (run against a different environment – just in case you were wondering why the numbers are not the same ones :-)):
Event Counts ACS Default Report

The numbers and percentages we got from the two queries above should already point us in the right direction about what we might want to adjust in our auditing policy directly on Windows, and/or whether there is something we want to filter out at the collector level (here you should ask yourself: "if they aren't worth collecting, are they worth generating?" – but I digress).

Also, a permutation of the above two queries should let you see which user is generating the most "noise" with regard to some events and not others… for example:

--event distribution for a specific user (change the @user) – with percentages for the user and compared with the total #events in the DB
declare @user varchar(255)
set @user = 'SYSTEM'
declare @total float
select @total = count(Id) from AdtServer.dvHeader
declare @totalforuser float
select @totalforuser = count(Id) from AdtServer.dvHeader where HeaderUser = @user
select count(Id), EventID, cast(convert(float,(count(Id)) / convert(float,@totalforuser) * 100) as decimal(10,2)) as PercentageForUser, cast(convert(float,(count(Id)) / (convert(float,@total)) * 100) as decimal(10,2)) as PercentageTotal
from AdtServer.dvHeader
where HeaderUser = @user
group by EventID
order by count(Id) desc

The above is particularly important, as we might want to filter out a number of events for the SYSTEM account (e.g. logons that occur when starting and stopping services) but keep other events that are tracked by the SYSTEM account too, such as an administrator having wiped the Security Log clean – which is probably something you want to keep:

Event ID 517 Audit Log was cleared

Of course the number of 517 events will only be a tiny fraction of the total tracked by the SYSTEM account, and we can still filter the other ones out.
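To put a number behind that reasoning, a small variation of the per-user query can show how much of the SYSTEM account's volume would still be filterable while keeping the events you care about (517 here; extend the list of kept IDs to match your own policy):

--how much SYSTEM "noise" could be filtered while still keeping Event ID 517
declare @system float
select @system = count(Id) from AdtServer.dvHeader where HeaderUser = 'SYSTEM'
select sum(case when EventId = 517 then 1 else 0 end) as KeptEvents,
       sum(case when EventId <> 517 then 1 else 0 end) as FilterableEvents,
       cast(sum(case when EventId <> 517 then 1 else 0 end) / @system * 100 as decimal(10,2)) as PercentFilterable
from AdtServer.dvHeader
where HeaderUser = 'SYSTEM'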

Number of Events by EventID and by User

We could also combine the two approaches above – by EventID and by User:

select count(Id),HeaderUser, EventId
from AdtServer.dvHeader
group by HeaderUser, EventId
order by count(Id) desc

This will produce a table like the following one

SQL Query: Events by EventID and by User

which can be easily copied/pasted into Excel in order to produce a pivot Table:

Pivot Table
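If you would rather not copy/paste into Excel, a rough equivalent of that pivot can be produced directly in T-SQL with a CASE-based cross-tab. This is just a sketch for a handful of event IDs picked from the table further up; Excel remains more convenient for ad-hoc pivoting:

--poor man's pivot: events per user, broken out by a few interesting Event IDs
select HeaderUser,
       sum(case when EventId = 538 then 1 else 0 end) as [538],
       sum(case when EventId = 540 then 1 else 0 end) as [540],
       sum(case when EventId = 672 then 1 else 0 end) as [672],
       sum(case when EventId = 517 then 1 else 0 end) as [517],
       count(Id) as Total
from AdtServer.dvHeader
group by HeaderUser
order by count(Id) desc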

Cluster EventLog Replication

One more aspect that is less widely known, but that I think is worth showing, is the way clusters behave with ACS. I don't mean all clusters… but if you keep the "eventlog replication" feature of clusters enabled (you should disable it also from a monitoring perspective, but I digress), each cluster node's security eventlog will have events not just for itself, but for all other nodes as well.

I have not found a reliable way to filter these out, other than disabling eventlog replication altogether.

Anyway, just to get an idea of how much this type of "duplicate" event weighs on the total, I use the following query, which tells you how many events for each machine are tracked by another machine:

--to spot machines that are cluster nodes with eventlog replication and write duplicate events (slow)
select Count(Id) as Total,replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','') as ForwarderMachine, EventMachine
from AdtServer.dvHeader
--where ForwarderMachine <> EventMachine
group by EventMachine,replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','')
order by ForwarderMachine,EventMachine

Cluster Events
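If you want that duplication expressed as a single number rather than a per-machine breakdown, you can compare the forwarder name (stripped of the domain prefix and the trailing '$', exactly as in the query above) with the machine the event belongs to, and compute the percentage of rows where the two differ. Again a sketch, and just as slow on large databases:

--overall weight of "replicated" events: rows where the forwarder is not the machine the event belongs to (slow)
declare @total float
select @total = count(Id) from AdtServer.dvHeader
select sum(case when replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','') <> EventMachine then 1 else 0 end) as ReplicatedEvents,
       cast(sum(case when replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','') <> EventMachine then 1 else 0 end) / @total * 100 as decimal(10,2)) as PercentOfTotal
from AdtServer.dvHeader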

Those presented above are just some of the approaches I usually look into at first. Of course there are a number more. Here I am including the same queries already shown in action, plus a few more that can be useful in this process.

I have even considered building a page with all these queries – a bit like those that Kevin is collecting for OpsMgr (we actually wrote some of them together when building the OpsMgr Health Check)… shall I move the queries below to such a page? I thought I'd list them here and give some background on how I normally use them, to start off with.

Some more Useful Queries

--top event ids
select count(EventId), EventId
from AdtServer.dvHeader
group by EventId
order by count(EventId) desc

--event count by ID (with Percentages)
declare @total float
select @total = count(EventId) from AdtServer.dvHeader
select count(EventId),EventId, cast(convert(float,(count(EventId)) / (convert(float,@total)) * 100) as decimal(10,2))
from AdtServer.dvHeader
group by EventId
order by count(EventId) desc

--which machines have ever written event 538
select distinct EventMachine, count(EventId) as total
from AdtServer.dvHeader
where EventID = 538
group by EventMachine

--machines
select * from dtMachine

--machines (more readable)
select replace(right(Description, (len(Description) - patindex('%\%',Description))),'$','')
from dtMachine

--events by machine
select count(EventMachine), EventMachine
from AdtServer.dvHeader
group by EventMachine

--rows where the EventMachine field is not available (typically events written by ACS itself for checkpointing)
select *
from AdtServer.dvHeader
where EventMachine = 'n/a'

--event count by day
select convert(varchar(20), CreationTime, 102) as Date, count(EventMachine) as total
from AdtServer.dvHeader
group by convert(varchar(20), CreationTime, 102)
order by convert(varchar(20), CreationTime, 102)

--event count by day and by machine
select convert(varchar(20), CreationTime, 102) as Date, EventMachine, count(EventMachine) as total
from AdtServer.dvHeader
group by EventMachine, convert(varchar(20), CreationTime, 102)
order by convert(varchar(20), CreationTime, 102)

--event count by machine and by date (distinguishes between AgentMachine and EventMachine)
select convert(varchar(10),CreationTime,102),Count(Id),EventMachine,AgentMachine
from AdtServer.dvHeader
group by convert(varchar(10),CreationTime,102),EventMachine,AgentMachine
order by convert(varchar(10),CreationTime,102) desc ,EventMachine

--event count by User
select count(Id),HeaderUser
from AdtServer.dvHeader
group by HeaderUser
order by count(Id) desc

--event count by User (with Percentages)
declare @total float
select @total = count(HeaderUser) from AdtServer.dvHeader
select count(HeaderUser),HeaderUser, cast(convert(float,(count(HeaderUser)) / (convert(float,@total)) * 100) as decimal(10,2))
from AdtServer.dvHeader
group by HeaderUser
order by count(HeaderUser) desc

--event distribution for a specific user (change the @user) – with percentages for the user and compared with the total #events in the DB
declare @user varchar(255)
set @user = 'SYSTEM'
declare @total float
select @total = count(Id) from AdtServer.dvHeader
declare @totalforuser float
select @totalforuser = count(Id) from AdtServer.dvHeader where HeaderUser = @user
select count(Id), EventID, cast(convert(float,(count(Id)) / convert(float,@totalforuser) * 100) as decimal(10,2)) as PercentageForUser, cast(convert(float,(count(Id)) / (convert(float,@total)) * 100) as decimal(10,2)) as PercentageTotal
from AdtServer.dvHeader
where HeaderUser = @user
group by EventID
order by count(Id) desc

--to spot machines that write duplicate events (such as cluster nodes with eventlog replication enabled)
select Count(Id),EventMachine,AgentMachine
from AdtServer.dvHeader
group by EventMachine,AgentMachine
order by EventMachine

--to spot machines that are cluster nodes with eventlog replication and write duplicate events (better but slower)
select Count(Id) as Total,replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','') as ForwarderMachine, EventMachine
from AdtServer.dvHeader
--where ForwarderMachine <> EventMachine
group by EventMachine,replace(right(AgentMachine, (len(AgentMachine) - patindex('%\%',AgentMachine))),'$','')
order by ForwarderMachine,EventMachine

--which user and from which machine is target of elevation (network service doing "runas" is a 552 event)
select count(Id),EventMachine, TargetUser
from AdtServer.dvHeader
where HeaderUser = 'NETWORK SERVICE'
and EventID = 552
group by EventMachine, TargetUser
order by count(Id) desc

--by hour, minute and user
--(change the timestamp)… this query is useful to search which users are active in a given time period…
--helpful to spot "peaks" of activities such as password brute force attacks, or other activities limited in time.
select datepart(hour,CreationTime) as Hours, datepart(minute,CreationTime) as Minutes, HeaderUser, count(Id) as total
from AdtServer.dvHeader
where CreationTime < '2010-02-22T16:00:00.000'
and CreationTime > '2010-02-22T15:00:00.000'
group by datepart(hour,CreationTime), datepart(minute,CreationTime),HeaderUser
order by datepart(hour,CreationTime), datepart(minute,CreationTime),HeaderUser

Using the SCX Agent with WSMan from Powershell v2

Monday, June 1st, 2009

So Powershell v2 adds a nice bunch of WS-Man related cmdlets. Let's see how we can use them to interact with OpenPegasus's WSMan on an SCX agent.

PS C:\maint> test-wsman -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential

[screenshot: credential prompt]

But we do get this error:

Test-WSMan : The server certificate on the destination computer (virtubuntu.huis.dom:1270) has the following errors:
The SSL certificate could not be checked for revocation. The server used to check for revocation might be unreachable.

The SSL certificate is signed by an unknown certificate authority.
At line:1 char:11
+ test-wsman <<<<  -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl
+ CategoryInfo          : InvalidOperation: (:) [Test-WSMan], InvalidOperationException
+ FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.TestWSManCommand

The credentials above have to be a unix login. Which we typed correctly. But we still can't get thru, as the certificate used by the agent is not trusted by our workstation. This seems to be the "usual" issue I first faced when testing SCX with WINRM in beta1. At the time I simply dismissed it with the following sentence:

[…] Of course you have to solve some other things such as DNS resolution AND trusting the self-issued certificates that the agent uses, first. Once you have done that, you can run test queries from the Windows box towards the Unix ones by using WinRM. […]

and I sincerely thought that it would explain things pretty well… but eventually a lot of people got confused by this and did not know what to do, especially for the part about trusting the certificate. Anyway, in the following posts I figured out you could pass the -skipCACheck parameter to WINRM… which solved the issue of having to trust the certificate (which is fine for testing, but I would not use that for automations and scripts running in production… as it might expose your credentials to man-in-the-middle attacks).

So it seems that with the Powershell cmdlets we are back to that issue, as I can’t find a parameter to skip the CA check. Maybe it is there, but with PSv2 not having been released yet, I don't know everything about it, and the CTP documentation is not yet complete. Therefore, back to trusting the certificate.

Trusting the certificate is actually very simple, but it can be a bit tricky when passing those certs back and forth from unix to windows. So let's make the process a bit clearer.

All of the SCX agents' certificates are ultimately signed by a key on the Management Server that has discovered them, but I don't currently know where that certificate/key is stored on the management server. Anyway, you can get it from the agent certificate – as you only really need the public key, not the private signing key.

Use WinSCP or any other utility to copy the certificate off one of the agents.
You can find that in the /etc/opt/microsoft/scx/ssl location:

[screenshot: /etc/opt/microsoft/scx/ssl directory listing]

that scx-host-computername.pem is your agent certificate.

Copy it to the Management server and change its extension from .pem to .cer. Now Windows will be happy to show it to you with the usual Certificate interface:

[screenshot: certificate details]

We need to go to the “Certification Path” tab, select the ISSUER certificate (the one called “SCX-Certificate”):

[screenshot: Certification Path tab]

then go to the “Details” tab, and use the “Copy to File” button to export the certificate.

After you have the certificate in a .CER file, you can add it to the “trusted root certification authorities” store on the computer you are running your powershell tests from.

[screenshot: trusted root certification authorities store]

So after you have trusted it, the same command as above actually works now:

PS C:\maint> test-wsman -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential

wsmid           : http://schemas.dmtf.org/wbem/wsman/identify/1/wsmanidentity.xsd
lang            :
ProtocolVersion : http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
ProductVendor   : Microsoft System Center Cross Platform
ProductVersion  : 1.0.4-248

Ok, we can talk to it! Now we can do something funnier, like actually returning instances and/or calling methods:

PS C:\maint> Get-WSManInstance -computer virtubuntu.huis.dom -authentication basic -credential (get-credential) -port 1270 -usessl -enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx

[screenshot: Get-WSManInstance output]

This is far from exhaustive, but should get you started on a world of possibilities about automating diagnostics and responses with Powershell v2 towards the OpsMgr 2007 R2 Cross-Platform machines. Enjoy!

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

Installing the OpsMgr 2007 R2 SCX Agent on Ubuntu

Saturday, May 30th, 2009

You know that since the beta1 of Xplat I have been busy modifying the Red Hat management pack to monitor CentOS with OpsMgr. Now, CentOS is a distribution that is pretty similar to Red Hat, so the RPM package just runs, and it is only a matter of hacking a modified MP.

I never really went further in my experiments, mostly due to lack of time… but then yesterday I got a comment to this older post asking about Ubuntu. Of course I know about Ubuntu, and have been using Debian-based distributions for years. I actually even prefer them over RPM-based distributions such as RedHat or SuSE (personal preference). Heck, even this weblog is running on Debian!

Anyway, I never really tried to see if one of the existing RPM packages for RedHat or SuSE could be modified to run on Ubuntu. I will eventually test this on Debian too, but for now I used Ubuntu, which tends to have slightly newer packages and libraries overall. The machine I tested on is an Ubuntu Server 8.04.2. Older/newer versions might differ slightly.

BEWARE THAT ALL THAT FOLLOWS BELOW IS NOT SUPPORTED BY MICROSOFT. It is only described here for EXPERIMENTAL (==fun) purpose. DO NOT USE THIS IN A PRODUCTION ENVIRONMENT.

So, you are warned. Now let’s hack it.

The first thing to do is to copy the Redhat agent's RPM package off your OpsMgr2007 R2 server from the "usual" path "C:\Program Files\System Center Operations Manager 2007\AgentManagement\UnixAgents". Let's grab the RHEL5 agent, which is called scx-1.0.4-248.rhel.5.x86.rpm in R2 RTM.

First we need to CONVERT the RPM package to the DEB package format used by Ubuntu, by using the ALIEN package:

sudo apt-get update
sudo apt-get install alien
sudo bash
alien -k scx-1.0.4-248.rhel.5.x86.rpm --scripts
dpkg -i scx_1.0.4-248_i386.deb

[screenshot: package conversion and install output]

The converted package will install… but the script execution will fail in a few places – most notably in the generation of the certificate, as it is not able to locate the right openssl libraries, as shown in the screenshot above.

If the libssl.so.6 file cannot be found, you might be missing the “libssl-dev” package, which you can install as follows:

apt-get install libssl-dev

But even if it is installed, you will find that the files still appear to be missing. This is not really true: the files are there, but on Ubuntu they have a different name than on RedHat, that's all. You can therefore create symlinks with the "right" names, so that the libraries are aliased and get found afterwards:

cd /usr/lib
ln -s libcrypto.so.0.9.8 libcrypto.so.6
ln -s libssl.so.0.9.8 libssl.so.6

So now when installing the package, the certificate generation will work:

[screenshot: certificate generation succeeding]

You are nearly ready to go. You have to start the service by using the init scripts – the "service" command is RedHat-specific and will still fail.

/etc/init.d/scx-cimd start is the “standard” way of starting daemons from init on Unix.

But it still fails, as it seems that the init script provided in the RedHat package is searching for a file called "functions", which is present on RedHat and on CentOS and provides re-usable functions for startup scripts to include:

[screenshot: init script failing to find the "functions" file]

How do you fix this? I just copied the /etc/init.d/functions file from a CentOS box to my Ubuntu box.

I copied it via SCP from the CentOS box I have:

cd /etc/init.d

scp root@centos.huis.dom:/etc/init.d/functions .

You can probably also find and fetch the file from the Internet (both CentOS and RedHat should have accessible repositories with all the files in their distributions, since they are open source).

After you have the file in place, the init script will be able to include it, will find the functions it needs, and the daemon/service will now start (albeit with minor errors I have not investigated for now, but which don't seem to be causing trouble):

[screenshot: the scx-cimd daemon starting]

and here you can see it is finally running:

[screenshot: the scx-cimd daemon running]

So let's try to issue a few queries as shown in a previous post:

[screenshot: query output]

IT WORKS!!!

But… there is a "but": not all classes actually return instances and values just yet. Most notably, the "SCX_OperatingSystem" class does not seem to return anything right away. That is a very important class, because it is the one we would use to first discover the Operating System object in the Management Packs. So we need to fix it. The reason why the class does not return anything is that the SCX provider is looking into the /etc/redhat-release file to return what OS version/distribution the machine is running. And the file is obviously not there on Ubuntu.

On all Linuxes there is a similar file, called /etc/issue… which again, we can copy with the other name and trick the provider into working:

cd /etc

cp issue redhat-release

And NOW, the SCX_OperatingSystem Class also returns an instance:

[screenshot: SCX_OperatingSystem returning an instance]

The next step would be "cooking" an MP to discover Ubuntu. More on this in a later post (maybe). I did not test all classes and their implementation… you can try to poke at them by following the instructions and commands in my previous post here. But this should get you started.

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

Early Adoptions, Health Checks and New Year Rants.

Tuesday, December 30th, 2008

Generations

Two days ago I read the following Tweet by Hugh MacLeod:

"[…] Early Adopter Problem: How to differentiate from the bandwagon, once the bandwagon starts moving faster than you are […]"

That makes me think of early adoption of a few technologies I have been working with, and how the community around those evolved. For example:

Operations Manager… early adoption meant that I have been working with it since the beta, and had posted one of the earliest posts about how to use a script in a Unit Monitor back in May 2007 (the product was released in April 2007 and there was NO documentation back then, so we had to really try to figure out everything…), but someone seems to think it is worth repeating the very same lesson in November 2008, with not a lot of changes, as I wrote here. I don't mean to be rude to Anders… repeating things will surely help the late adopters find the information they need, of course.

Also, I started playing early with Powershell. I posted my first (and only) cmdlet back in 2006. It was not a lot more than a test for myself to learn how to write one, but that's just to say that I started playing early with it. I have been using it to automate tasks for example.

Going back to the quote above, everyone gets on the bandwagon posting examples and articles. I had been asked a few times about writing articles on OpsMgr and Powershell usage (for example by www.powershell.it) but I declined, as I was too busy using this knowledge to do stuff for work (where "work" is defined as "work that pays your mortgage"), rather than seeking personal prestige through articles and blogs. Anyway, that kind of article is now appearing all over the Internet and the blogosphere. The above examples made me think of early adoption, and the bandwagon that follows later on… but even as an early adopter, I was never very noisy or visible.

Now, going back to what I do for work (which I mentioned here and here in the past), I work in the Premier Field Engineering organization of Microsoft Services, which provides Premier services to customers. Microsoft Premier customers have a wide range of Premier agreement features and components that they can use to support their people, improve their processes, and improve the productive use of the Microsoft technology they have purchased. Some of the services we provide are known to the world as "Health Checks", some as "Risk Assessment Programs" (or, shortly, RAPs). These are basically services where one of our technology experts goes on site at the customer and uses a custom, private Microsoft tool to gather a huge amount of data from the product we mean to look at (be it SQL, Exchange, AD or anything else…). The Health Check or RAP tool collects the data and outputs a draft of the report that will be delivered to the customer later on, with all the right sections and chapters. This is done so that every report of the same kind will look consistent, even if the engagement is performed by a different engineer in a different part of the world. The engineer will of course analyze the collected data and write recommendations about what is configured properly and/or about what could or should be changed and/or improved in the implementation to make it adhere to Best Practices. To make sure only the right people actually go onsite to do this job, we have a strict internal accreditation process that must be followed; only accredited resources that know the product well enough and know exactly how to interpret the data that the tool collects are allowed to use it, to deliver the engagement, and to present/write the findings to the customer.

So why am I telling you this here, and what have I been using my early knowledge of OpsMgr and Powershell for?

I have used that to write the Operations Manager Health Check, of course!

We had a MOM 2005 Health Check already, but since the technology has changed so much from MOM to OpsMgr, we had to write a completely new tool. Jeff (the original MOM 2005 author, who does not have a blog that I can link to) and I are the main coders of this tool… and the tool itself is A POWERSHELL script. A longish one, of course (7000 lines, more or less), but nothing more than a Powershell script, at the end of the day. There are a few more colleagues who helped shape the features and tested the tool, including Kevin Holman. Some of the database queries on Kevin's blog are in fact what we use to extract some of the data (beware that some of those queries have recently been updated, in case you saved them and are using your local copy!), while some other information uses internal and/or custom queries. Some other times we use OpsMgr cmdlets or go to the SDK service, but a lot of times we query the database directly (we really should use the SDK all the time, but for certain stuff direct database access is way faster). It took most of the past year to write it, test it, troubleshoot it, fix it, and deliver the first engagements as "beta" to some customers to help iron out the process… and now the delivery is available! If a year seems like a long time, you have to consider that this is all work that gets done next to what we all have to normally do with customers, not replacing it (i.e. I am not free to sit on my butt all day and just write the tool… I still have to deliver services to customers day in, day out, in the meantime).

Occasionally, during this past calendar year that is approaching its end, I have been willing, and have found some extra time, to disclose some bits and pieces, techniques and prototypes of how to use Powershell and OpsMgr together, such as innovative ways to use Powershell in OpsMgr against beta features, but in general most of my early adopter's investment went into the private tool for this engagement, and that is one of the reasons I couldn't blog or write much about it, it being Microsoft Intellectual Property.

But it is also true that I did not care to write other stuff when I considered it too easy or when it could be found in the documentation. I like writing about ideas, thoughts, rants OR things that I discover and that are not well documented at the time I study them… so when I figure things out I might like leaving a trail for some to follow. But I am not here to spoon-feed people like some in the bandwagon are doing. Now the bandwagon is busy blogging and writing continuously about some aspect of OpsMgr (known or unknown, documented or not), and the answer to the original question of Hugh is, in my opinion, that it does not really matter what the bandwagon is doing right now. I was never here to do the same thing. I think that is my differentiator. I am not saying that what a bunch of colleagues and enthusiasts are doing is not useful: blogging and writing about various things they experiment with is interesting and it will be useful to people. But blogs are useful up to a certain limit. I think that blogs are best suited for conversations and thoughts (rather than for "howto's"), and what I would love to see instead is less marketing hype when new versions are announced and more real, official documentation.

But I think I should stop caring about what the bandwagon is doing, because that's just another ego trip at the end of the day. What I should more sensibly do would be to listen to my horoscope instead:

[…] "How do you slay the dragon?" journalist Bill Moyers asked mythologist Joseph Campbell in an interview. By "dragon," he was referring to the dangerous beast that symbolizes the most unripe and uncontrollable part of each of our lives. In reply to Moyers, Campbell didn't suggest that you become a master warrior, nor did he recommend that you cultivate high levels of sleek, savage anger. "Follow your bliss," he said simply. Personally, I don't know if that's enough to slay the dragon — I'm inclined to believe that you also have to take some defensive measures — but it's definitely worth an extended experiment. Would you consider trying that in 2009? […]

CentOS discovery in OpsMgr2007 R2 beta

Sunday, November 23rd, 2008

Here we go again. Now that the OpsMgr2007 R2 beta is out, with an improved and revamped version of the System Center Cross Platform Extensions, I faced the issue of how to upgrade my test lab.

I have to say that the OpsMgr2007 R2 beta release notes explain the known issues, and I had no trouble whatsoever upgrading the Windows part. It just took its time (I am running virtual machines in my test lab that don't have the best performance), but it went smoothly and without a glitch. In a couple of hours I had everything upgraded: databases, RMS, reporting, agents, gateway. All right then. The new purple icons in System Center look cute, and the new UI has some great stuff, such as a long-awaited way to update your management packs directly from the Internet, better display of Overrides (kind of what we used to rely on Override Explorer for)… and A LOT more new stuff that I won't be wasting my Sunday writing about, since everybody else has already done it two days ago:

opsmgr aggregated feed on Twitter

Therefore let's get back to my upgrade, which is a lot more interesting (to me) than the marketing tam-tam :-)

As part of the upgrade to R2, I first had to uninstall the Xplat beta refresh bits, which I had installed, including all Unix Management Packs – including the CentOS Management Pack I had improvised.

So this is the new start page of the integrated Discovery Wizard:

Discovery Wizard

Looks nice and integrates the functionality of discovering and deploying Windows machines, SNMP Devices, and Unix/Linux machines.

Of course, my CentOS machine would not be discovered, and showed up as an unsupported platform. Of course my old Management Pack I had hacked together in XPlat Beta 1 did not work anymore. Therefore, I figured I had to see what had changed, and how to make it work again (of course it IS possible – it is NOT SUPPORTED, but I don't care, as long as it works).

Since the existing agent could not be discovered, the first step I took was logging on to the Linux box, uninstalling the old agent, and installing the new one:

XPlat Agent RPM Install on CentOS

There I tried to discover again, but of course it still failed.

At that point I started taking a look at the new layout of things on the unix side. Most stuff is located in the same directories where beta1 was installed, and there are a bunch of useful commands under /opt/microsoft/scx/bin/tools.
You can check out the Open Pegasus version used:

[root@centos tools]# ./scxcimconfig --version
Version 2.7.0

Let's take a look at what SCX classes we have available:

./scxcimcli nc -n root/scx -di |grep SCX | sort

[screenshot: the SCX classes returned by the command above]

Nice. That's the stuff we will be querying over WS-Man from the Management Server.

So let's look at the OS Discovery, and we test it from the OpsMgr 2007 box:

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -username:root -password:password -r:https://centos:1270/wsman -auth:basic -skipCACheck

it returns results:

OS WS-Man Query

At first I assumed this worked like in Beta1, therefore I exported the RedHat management pack and made my own version of it, replacing the strings it expects to find so that it would discover CentOS instead of RedHat.

While the MP was syntactically correct and would import fine, the Discovery wizard still didn't work.

I took one more look at the discoveries in the MP, and I found there are two more, targeted at the Management Server, which are probably what gets used by the Discovery Wizard to understand what kind of agent kit needs to be deployed.

MP XML - Discoveries

So basically this discovery checks for the returned value from the module to determine if the discovered platform is a supported one:

Discovery Settings

But how does the module get its data?

Look at the layout of the /AgentManagement/UnixAgents folder on the Management Server:

/AgentManagement/unixAgents

That's it: GetOSVersion.sh – a shell script. A nice, open, clear text, hackable shell script. Let's take a look at it:

Discovery Script Hack

So that's it, and that is what my modification looks like. What happens during the Discovery Wizard is that the script probably gets copied to the box over SCP and executed; it looks at a number of things and returns the discovery data we need.

If you do those steps manually, you see how the script returns something very similar to a PropertyBag, just like discoveries done by VBScript on Windows machines:

Discovery Script Output

So after modifying the script… here we go. The Wizard now thinks CentOS is Red Hat, and can install an agent on it:

Discovery Wizard

Deploying Agent

Only when the Management Server discovery finally considers the CentOS machine worth managing do the other discoveries, the ones that use WS-Man queries, start kicking in, like the old one did, and find the OS objects and all the other hosted objects. In order for this to work you not only need to hack the shell script, but also to have a hacked MP – the "regular" Red Hat one won't find CentOS, which is and remains an UNSUPPORTED platform.

CentOS Health Model

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

Protecting custom Resolution State in OpsMgr 2007

Saturday, September 13th, 2008

In System Center Operations Manager 2007, you can add and remove resolution states for your alerts at will. Other than states "0" ("New") and "255" ("Closed"), you can create another 254 resolution states to suit your needs. This is a simple feature that was already present in previous MOM versions, and it is very useful for doing all kinds of tricks with your alerts. The number of possible states you can create should be able to satisfy any kind of alert and incident management process you might have in place, and any kind of filtering, forwarding or escalation need you might want to address by using resolution states.

[screenshot: resolution state settings]

By default, only OpsMgr Administrators can change these settings, with the exception of the two built-in states of "New" and "Closed": those two states are REQUIRED if you want the product to continue working, therefore the GUI won't let you change, edit or delete them. Which is good.

This is not true for your own resolution states, which can be edited or even deleted any time. All that is really saved in an alert when you change an alert's resolution state is the NUMBER associated with it. In fact you even use that number when querying for alerts in the Command Shell:

get-alert | where {$_.resolutionstate -eq 0}

That means that if by accident you delete a resolution state you have defined, you won't see its description anymore in the GUI. Also, if you try to re-organize your resolution states, you can easily change the IDs of existing ones… Sure, you need to have the permissions in order to change or delete them, but what if you have implemented your important Alert and Incident management process by using resolution states and you want a bit of extra protection from mistakes or unintended deletion?

Then you can protect them by making the product think they were "built-in" too, just like "New" and "Closed".

How would you do this? In an UNSUPPORTED WAY: editing the database :-) In fact, those resolution states are written in a table in the database, called "ResolutionState" (who would have guessed it?), that looks like the following picture:

dbo.ResolutionState

Can you see the "IsPredefined" column? That can be set to "True" or "False" and that value is used by the SDK service to tell the GUI if that Resolution State can be edited/deleted or not.

Of course changing the database directly IS NOT SUPPORTED by Microsoft. You do this at your own risk, and if it were me, I would *NEVER* touch, change or remove the two default states ("New" and "Closed") as THAT really would BREAK the product. For example, Alerts that are not set to "Closed" (255) won't ever be groomed. And that is VERY BAD. NEVER, NEVER DO THAT.

On the other hand, changing a custom Resolution State to make the product believe it is Predefined/Built-in has not had any negative impact in my (limited) testing so far, and has added the advantage of "protecting" my resolution state from unintended deletion, as shown below:

[screenshot: protected custom resolution state]
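For illustration only, the change boils down to flipping that flag on the row of your custom state. The sketch below assumes the numeric state value lives in a column called ResolutionState and uses 42 as a made-up example of a custom state; check the actual column names in your OperationsManager database, take a backup first, and remember this is completely unsupported:

--UNSUPPORTED example: mark a custom resolution state as "built-in" so the GUI protects it from deletion
--column names are assumptions based on the screenshot above; 42 is a hypothetical custom state number
update dbo.ResolutionState
set IsPredefined = 1
where ResolutionState = 42   --your custom state; NEVER touch 0 ("New") or 255 ("Closed")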

As usual, do this at your own risk. Remember what's written in my Disclaimer:

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MICROSOFT, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS HACK.

CentOS 5 Management Pack for OpsMgr SCX

Tuesday, May 13th, 2008

As I mentioned here, I have been testing the SCX beta.

Not having one of the "supported" platforms pushed me into playing with the provided Management Packs, and in turn I managed to use the MP for Red Hat Enterprise Linux 5 as a base, and replaced a couple of strings in the discoveries in order to get a working CentOS 5 Management Pack.

CentOS_HealthExplorer01_NEW

I still have not looked into the "hardware" monitors and health model / service model, so those are not currently monitored. But it is a start.

A lot of people have asked me for more information and would like to get the file – in the blog's comments, on the newsgroup, or via mail. I am sorry, but I cannot provide you with the file, because it has not been thoroughly tested and might render your systems unstable, and also because there might be licensing and copyright issues that I have not checked within Microsoft.

Keep also in mind that using CentOS as a monitored platform is NOT a SUPPORTED scenario/platform for SCX. I only used it because I did not have a Suse or Redhat handy that day, and because I wanted to understand how the Management Packs using WS-Man worked.

This said, should you wish to try to do the same "MP Hacking" I did,  I pretty much explained all you need to know in my previous post and its comments, so that should not be that difficult.

Actually, I still think that the best way to figure out how things are done is by looking at the actual implementation, so I encourage you to look at the management packs and figure out how those work. There are a few mature tools out there that will help you author/edit Management Packs if you don't want to edit the XML directly: the Authoring Console, and Silect MP Studio Lite, for example. If you want to delve in the XML details, instead, then I suggest you read the Authoring Guide and peek at Steve Wilson's AuthorMPs.com site.

Disclaimer
The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS PROGRAM.

Testing System Center Cross Platform Extensions

Sunday, May 4th, 2008

I am testing the beta bits of the cross-platform extensions that were released on Microsoft Connect 

This post describes my limited testing so far – I hope this can benefit/help everyone testing the beta with some stuff that might currently not be incredibly clear (unless you attended the MMS class, at least :-)).

I started out with the White Paper that has been posted on the web, which describes the architecture pretty well, but from a higher level (with diagrams and the like). Then I downloaded the beta bits, which contain another document about setting the thing up. It is pretty well done, to be honest (especially if you consider that it is beta documentation for a beta product!), but it does not go very deep into troubleshooting yet. I will try to cover some of that here.

I installed the agent manually – it's just an RPM package, so not much can go wrong there. There is a reason why I did not use the push discovery and deployment of the agent, which you will figure out reading on. Once installed, I tried to figure out how things looked on the Linux machine. It is all pretty understandable if you look around on the box (documented or not, Linux and open source stuff is easy to figure out by reading configuration files and by searching the web).

Basically the "agent" is not really an "agent" the way the Windows agent is, since it does not actually "send" anything to the Management Server on its own. It consists of a couple of services/daemons, based on existing open source projects but configured in their own folders, with their own names, and using different ports than a standard install of those projects, so as not to conflict with any existing instances on the machine.

The Management Server uses these services remotely (similar to doing agentless monitoring towards a Windows box). The two services are the WS-Management daemon (wsmand) and the CIM daemon (cimd):

 scx-services commands

It is easy to figure out how they are laid out. Even though it is not documented, you can look at the processes

SCX processes

and you can figure out WHERE they live (/opt/microsoft/scx/bin/….) and where their configuration files are located (/etc/opt/microsoft/scx/conf …).

SCX Configuration

The files are self-explanatory, and the documentation for the open source projects can be found on the Internet:

for wsmand

for cimd

 

I still have to delve into them as deeply as I would like to, but I have already figured out a bunch of interesting things just by looking at them quickly.

Agent communication: someone must have decided to "recycle" the 1270 port number that was used in MOM 2005 :-) Basically openwsman runs as an SSL listener with basic authentication, hooked via a PAM module to the "regular" unix /etc/passwd users, so you can authenticate as those without having to define specific users for the service. All that happens is that the Management Server executes WS-Man queries and commands over this channel: it connects to the agent on port 1270 using SSL every time, authenticates as "root" (or as the specified "Action Account"), and either does its stuff or asks the agent to do it. So the communication goes from the Management Server to the agent… not the other way around, as happens with Windows "agents". That's why it feels to me more like an "agentless" setup, at least for what concerns the "direction" of the traffic and who does the actual querying.
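As a side note, a quick way to double-check from the Management Server that the agent's SSL listener is actually reachable is a plain TCP connection test from PowerShell – a minimal sketch, where "centos" is just the hostname of my test box:

# Quick reachability test for the agent's WS-Man listener on port 1270
$client = New-Object System.Net.Sockets.TcpClient
$client.Connect("centos", 1270)   # throws an exception if the port is not reachable
$client.Connected                 # should print True
$client.Close()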

For the rest, the provided Management Packs have "normal" discoveries and "normal" monitors. Much like the Windows Management Packs often discover things by querying WMI, here they use WS-Man to run CIM queries against the Unix boxes.

The Service Model is totally cool to actually *SEE* in action, don't you think so?

Service Model

 

A few more debugging/troubleshooting notes:

I searched a bit and found the openwsman.org documentation and forum useful to figure some things out. For example, I banged my head a few times before managing to actually TEST a query from Windows to Linux using WinRM. This document helped a lot.

Of course you first have to solve a couple of other things, such as DNS resolution AND trusting the self-signed certificates that the agent uses. Once you have done that, you can run test queries from the Windows box towards the Unix ones by using WinRM.

For example, this is how I tested what the discovery for the Linux RedHat Computer type should be returning (I figured that out by opening the MP in the Authoring Console, as one would usually do for any MP):

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -username:root -password:password -r:https://centos:1270/wsman -auth:basic
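If certificate trust is still getting in the way during a first test, winrm should also accept switches to skip the certificate-authority and common-name checks (depending on your WinRM version) – use them only for a quick smoke test, and sort out the certificates properly afterwards:

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -username:root -password:password -r:https://centos:1270/wsman -auth:basic -skipCAcheck -skipCNcheck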

If you need to test the query directly *ON* the Linux box (querying CIMD instead of WSMAND), the WBEMEXEC utility is packaged with the agent (under /opt/microsoft/scx/bin/tools). It is not as easy as some Windows administrators (who have used WBEMTEST or WMI Tools in the past) would hope, but it is not that bad either. It is not really interactive, though: just to run a few queries against the CIM daemon locally, you need to create an XML file that looks like the following (basically you build the raw request in the format CIMD accepts):

 

 

<?xml version="1.0" ?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
  <MESSAGE ID="50000" PROTOCOLVERSION="1.0">
    <SIMPLEREQ>
      <IMETHODCALL NAME="EnumerateInstanceNames">
        <LOCALNAMESPACEPATH>
          <NAMESPACE NAME="root"/>
          <NAMESPACE NAME="scx"/>
        </LOCALNAMESPACEPATH>
        <IPARAMVALUE NAME="ClassName">
          <CLASSNAME NAME="SCX_OperatingSystem"/>
        </IPARAMVALUE>
      </IMETHODCALL>
    </SIMPLEREQ>
  </MESSAGE>
</CIM>

 

 

Once you have made such a file, you can execute the query it contains with the tool, like this:

./wbemexec -d2 query.xml

 

As you can see from here, CIMD already uses HTTP, unlike Windows' WMI which uses RPC/DCOM. In a way, this is much simpler to troubleshoot, and more firewall-friendly.

 

I have not really found an activity or debug log for any of these components yet… but in the end they do not do anything ON THEIR OWN unless asked by the Management Server, so the "healthservice" logic all lives on the Management Server anyway. Errors about failed discoveries, permissions of the Action Account user, and anything else will be logged by the HealthService on the Windows machine (the Management Server) that is actually performing the monitoring towards the Unix box.
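For example, a quick way to peek at what the Management Server is logging about failed discoveries or Action Account problems is to read its event log from PowerShell – a rough sketch, assuming the default "Operations Manager" event log name; adjust the filter and the -Newest count to taste:

# Show the latest non-informational entries logged on the Management Server
Get-EventLog "Operations Manager" -Newest 50 |
    Where-Object { $_.EntryType -ne "Information" } |
    Format-Table TimeGenerated, Source, EventID, Message -AutoSize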

It really is *just* a matter of getting the WMI- and WinRM-equivalent layer on Linux/Unix up and running – after that, everything is done from Windows anyway!

Once this common management infrastructure is in place, third parties can focus on writing *just* the MPs, without having to worry about the TRANSPORT of the information anymore.

 

As you have probably noticed from the screenshots and command lines, I don't have a "real" Red Hat Enterprise Linux or any other "supported" Linux distribution… so I started my testing with CentOS 5 (which is very similar to RHEL 5). The agent installed fine, as you can see, but nothing was really being "discovered": the MP had only found a "Linux computer", but no "RedHat", "SuSe" or any other "Operating System" instances. If you are somewhat familiar with the way Operations Manager targeting works, you know that monitors are targeted at object classes: if no instance of those classes is discovered, NO MONITORING actually happens, even if the infrastructure is in place and the pieces are talking to each other:

 CentOS not discovered

Therefore my machine was not being monitored.

In the end I actually got it to work, but I had to create a new Management Pack (exporting and modifying the RHEL5 one as a base) that searches for different property values and discovers CentOS as if it were RedHat:

CentOS Discovered 

After importing my hacked Management Pack the machine started to be monitored. Here you can see Health Explorer in all of its glory:

image008

Of course this is a hack I made just to get a test setup working and to familiarize myself with the SCX components. It is not guaranteed that my Management Pack actually works on CentOS the way it is supposed to, or that there aren't other – more subtle – differences between RedHat and CentOS that will make it fail. I only modified a couple of Discoveries to let it discover the "Operating System" instance… everything else should follow, but not necessarily. One difference you can already see in the screenshot above is that the hardware is not being monitored yet, so my hack is only partially working; it is definitely something that won't be supported, so I cannot provide it here. Also, this is a beta, so I think the Management Packs will be re-released with subsequent beta versions, and this change would need to be re-done all over again. The unsupported distribution is also the reason why I installed the agent manually in the first place: the "Discovery Wizard" would not "agree" to install the agent remotely on an unsupported platform!

But I could not wait to see this working while waiting two business days (we are on a weekend!) for confirmation that I am allowed to download a 30-day unsupported trial of the "real" RedHat Enterprise Linux, so I cheated :-)
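For reference, should you want to try the same kind of hack yourself, exporting the original sealed MP to use as a base can be done from the OpsMgr Command Shell – roughly like this (the DisplayName filter is just a guess at how the RHEL5 pack shows up in your Management Group, so check yours first):

# Export the RHEL5 management pack XML so it can be used as a base for editing
$mp = Get-ManagementPack | Where-Object { $_.DisplayName -like "*Red Hat Enterprise Linux 5*" }
Export-ManagementPack -ManagementPack $mp -Path "C:\temp"

The exported XML is where I changed the couple of discovery strings so that they match the property values CentOS reports, before importing the result as a new, differently-named Management Pack.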

 

 

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENTS SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION.

Looking at OpsMgr2007 Alert trend with Command Shell

Friday, January 25th, 2008

It's Friday night, I am quite tired and can't be bothered to write a long post. But I have not written much all week, not even updated my Twitter, and now I want to finish the week with at least some goodies. So here are a couple of PowerShell commands/snippets that count the alerts and events generated each day: this information can help you understand the trend of events and alerts over time in a Management Group. It is nothing fancy at all, but it can still be useful to someone out there. In the past (MOM 2005) I used to gather this kind of information with SQL queries against the operations database. But now, with PowerShell, everything is exposed as objects and it is much easier to get at the information without really getting your hands dirty with the database :-)

#Number of Alerts per day

$alerttimes = Get-Alert | Select-Object TimeRaised
$array = @()

foreach ($datetime in $alerttimes) {
    $array += $datetime.TimeRaised.Date
}

$array | Group-Object Date

#Number of Events per day

$eventtimes = Get-Event | Select-Object TimeGenerated
$array = @()

foreach ($datetime in $eventtimes) {
    $array += $datetime.TimeGenerated.Date
}

$array | Group-Object Date

Beware that these "queries" might take a long time to execute (especially the events one) depending on the amount of data and your retention policy.
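If you prefer a one-liner, grouping on a calculated property gives the same daily counts (the same caveat about execution time applies):

# Alerts per day, grouped directly on the date portion of TimeRaised
Get-Alert | Group-Object { $_.TimeRaised.Date } | Format-Table Name, Count -AutoSize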

This is of course just scratching the surface of the amount of amazing things you can do with Powershell in Operations Manager 2007. For this kind of information you might want to keep an eye on the official "System Center Operations Manager Command Shell" blog: http://blogs.msdn.com/scshell/

Simply Works

Thursday, December 27th, 2007

Simply Works

Simply Works, uploaded by Daniele Muscetta on Flickr.

I don't know about other people, but I get a lot to think about when the end of the year approaches: all that I've done, what I have not yet done, what I would like to do, and so on…

And it is a period when memories surface.

I found the two old CD-ROMs you can see in the picture. And those are memories.
missioncritical software was the company that invented a lot of stuff that became Microsoft's products: for example ADMT and Operations Manager.

The black CD contains SeNTry, the "enterprise event manager", what later became Operations Manager.
On the back of the CD, the company motto at the time: "software that works simply and simply works".
So true. I might digress on this concept, but I won't do that right now.

I have already explained in my other blog what I do for work. Well, that was a couple of years ago anyway. Several things have changed, and we are moving towards offering services that are more measurable and professional. And in this kind of job it happens that you need to be an "expert" and to "specialize" in order to be "seen" or "noticed".
You know I don't really believe in specialization. I have written it all over the place. But you need to make other people happy as well and let them believe what they want, so when you "specialize" they are happier. No, really, it might make a difference in your career :-)

In this regard, I did also mention my "meeting again" with Operations Manager.
That's where Operations Manager helped me: it let me "specialize" in systems and applications management… a field where you need to know a bit of everything anyway: infrastructure, security, logging, scripting, databases, and so on… :-)
This way, everyone wins.

Don't misunderstand me, this does not mean I want to know everything. One cannot possibly know everything, and the more I learn the more I believe I know nothing at all, to be honest. I don't know everything, so please don't ask me everything – I work with mainframes :-)
While that can be a great excuse to avoid neighbours' and relatives' pleas for help with their PCs, on a more serious note I still believe that no intelligent individual can be locked into doing one narrow thing, knowing only that one bit, just because common wisdom says you have to act that way.

If I stopped where I am supposed to stop, I would be the standard "IT Pro". I would be fine, sure, but I would get bored soon and I would not learn anything. But I don't feel I am the standard "IT Pro". In fact, funnily enough, on some other blogs out there I have been referenced as a "Dev" (find it on your own, look at their blogrolls :-)). But then I am not a Dev either… I don't write code for work. I would love to, but I rarely do, other than some scripts. Anyway, I tend to escape the definition of the usual "expert" on something… mostly because I want to escape it. I don't see myself represented by those generalizations.

As Phil puts it, when asked "Are software developers – engineers or artists?":

"[…] Don’t take this as a copout, but a little of both. I see it more as craftsmanship. Engineering relies on a lot of science. Much of it is demonstrably empirical and constrained by the laws of physics. Software is less constrained by physics as it is by the limits of the mind. […]"

Craftsmanship. Not science.
And stop calling me an "engineer". I am not an engineer. I was even crap at math, in school!

Anyway, what does this all mean? In practical terms it means that, in the end, whether I want it or not, I do get considered an "expert" on MOM and OpsMgr… and that I will mostly work on those products next year too. But that is not bad: as I said, working on that product means working on many more things too. Also, I can appeal to different audiences: those who believe in "experts" and those who go beyond such schemes. It also means that I will have to continue teaching a couple of scripting classes (both VBScript and PowerShell) that nobody else seems willing to teach (because they are all *experts* in something narrow), and that I will still be hacking together my other stuff (my facebook apps, my wordpress theme and plugins, my server, etc) and even continue to have strong opinions in those other fields that I find interesting and where I am not considered an *expert* 😉

Well, I suppose I've been ranting enough for today…and for this year :-)
I really want to wish everybody again a great beginning of 2008!!! What are you going to be busy with, in 2008 ?

Monitoring Syslog with OpsMgr 2007

Friday, November 9th, 2007

I had missed it… finally guidance on how to collect and monitor UNIX syslog in System Center Operations Manager 2007 has been published!

This is much more sysadmin-oriented than what was available before (which of course remains relevant, but more from the point of view of a Management Pack developer who wants to know how things work "under the hood").

Create a Script-Based Unit Monitor in OpsMgr2007 via the GUI

Thursday, May 10th, 2007

Warning for people who landed here: this post is VERY OLD. It was written in the early days of struggling with OpsMgr 2007, and when nobody really knew how to do things.
I found that this way was working – and it surely does – but what is described here is NOT the recommended way to do things nowadays. This post was only meant to fill in a gap I was feeling existed, back in 2007.
But as time passes, and documentation gets written, knowledge improves.
Therefore, I recommend you read the newly released Composition chapter of the MP Authoring Guide instead
http://technet.microsoft.com/en-us/library/ff381321.aspx – and start building your custom modules to embed scripts as Brian Wren describes in there, so that you can share them between multiple rules and monitors.

This said, below is the original post.

Create a Script-Based Unit Monitor in OpsMgr2007 via the GUI

There is not a lot of documentation for System Center Operations Manager 2007 yet.
It is coming, but a lot of things have changed since the previous release and I think some more documentation would only help. Also, a lot of the content I am seeing is either too newbie-oriented or too developer-oriented, for some reason.

I have not yet seen a tutorial, webcast or anything else that explains how to create, via the GUI, a simple unit monitor that runs a VBScript.

So this is how you do it:

Go to the "Authoring" space of OpsMgr 2007 Operations Console.
Select the "Management Pack objects", then "Monitors" node. Right click and choose "Create a monitor" -> "Unit Monitor".

You get the "Create a monitor" wizard open:
wizard02

Choose to create a two-state unit monitor based on a script. Creating a three-state monitor would be pretty similar, but I'll show you the simplest one.
Also, choose the Management Pack that will contain your script and unit monitor, or create a new Management Pack.
wizard03

Choose a "monitor target" (object classes or instances – see this webcast about targeting rules and monitors: www.microsoft.com/winme/0703/28666/Target_Monitoring_Edit… ) and the aggregate rollup monitor you want to roll the state up to.

Choose a schedule, that is: how often you would like your script to run. For demonstration purposes I usually choose a very short interval, such as two or three minutes. For production environments, though, choose a longer interval.
wizard04

Choose a name for your script, complete with a .VBS extension, and write the code of the script in the rich text box:
wizard05

As the sample code and comments suggest, you should use a script that checks whatever you want it to check and returns a "Property Bag" that can later be interpreted by the OpsMgr workflow to change the monitor's state.
This is substantially different from scripting in MOM 2005, where you could only launch scripts as responses, losing all control over their execution.

For demonstration purposes, use the following script code:
 

On Error Resume Next

' Create the MOM Script API object and a property bag to hold the result
Dim oAPI, oBag
Set oAPI = CreateObject("MOM.ScriptAPI")
Set oBag = oAPI.CreateTypedPropertyBag(StateDataType)

Const FOR_APPENDING = 8
strFileName = "c:\testfolder\testfile.txt"
strContent = "test "

' Try to open the file for appending; Err.Number tells us whether it failed
Set objFS = CreateObject("Scripting.FileSystemObject")
Set objTS = objFS.OpenTextFile(strFileName, FOR_APPENDING)
If Err.Number <> 0 Then
    Call oBag.AddValue("State", "BAD")
Else
    Call oBag.AddValue("State", "GOOD")
    objTS.Write strContent
End If

' Hand the property bag back to the monitoring workflow
Call oAPI.Return(oBag)

[edited on 29th of May, as pointed out by Ian: if you cut and paste the example script you might need to change the apostrophes ("), as they cause the script to fail when run – it is an issue with the template of this blog.] [edited on 30th of May: I fixed the blog so that post content now shows plain, normal double quotes instead of fancy ones. It seems like a useful thing, since from time to time I post code…]

The script will try to write into the file c:\testfolder\testfile.txt.
If it finds the file and manages to write (append text) to it, it will return the property "State" with a value of "GOOD".
If it fails (for example if the file does not exist), it will return the property "State" with a value of "BAD".

In MOM 2005, scripts could only generate Events or Alerts directly as a means to communicate their results back to the monitoring engine. In OpsMgr 2007 your script can spit out a property bag, and the monitoring workflow then continues and decides what to do depending on the script's result.

wizard06

So the next step is to go and check for the value of the property we return in the property bag, to determine which status the monitor will have to assume.

We use the syntax Property[@Name='State'] in the parameter field, and we search for the value that indicates the unhealthy condition:

wizard07

Or for the healthy one:
wizard08

Then we decide which status the monitor will assume in the healthy and unhealthy conditions (usually Green/Yellow or Green/Red)
wizard09

Optionally, we can decide to raise an Alert when the status changes to unhealthy, and close it again when it goes back to healthy.

wizard10

Now our unit monitor is done.
All we have to do is wait for it to be pushed down to the agent(s) that should execute it, and then wait for its status to change.
It should go to the unhealthy state first.
To test that it works, just create the text file it looks for, wait for the script to run again, and the state should reset to Healthy.
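If the agent machine has PowerShell on it, a quick way to create the folder and the test file the script looks for is:

# Create the folder and file used by the sample monitor script
New-Item -ItemType Directory -Path "C:\testfolder" -Force | Out-Null
Set-Content -Path "C:\testfolder\testfile.txt" -Value "test"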

Have fun with more complex scripts!

MOM2005 vs. OpsMgr2007 and ITIL ?

Friday, April 27th, 2007

 

MOM has always been a great tool out of the box because it sort of FORCED you to implement an Incident Management Process to deal with Alerts, as described here:
http://ianblythmanagement.wordpress.com/2006/07/27/mom-2005-and-itil-part-1/
In fact, Alerts had to be actually set to "Resolved", and this had to be done manually.

I have now been wondering for a while: "How is OpsMgr2007 going to affect this?" I refer to the fact that now OpsMgr2007 does something customers have been asking for a while: it can auto-resolve alerts as soon as the incident/issue is fixed, by monitoring the state of the component rather than waiting for people to resolve it!

Practically, people were often the bottleneck, due to a missing Incident Management Process. MOM has tried for nearly 8 years to push them to implement one… and I feel that it finally gave up even trying.

All the other points described in the other two articles of Ian's series still apply.

For Capacity Management nothing substantially changes.
Availability Management is greatly improved, thanks to the generic "availability report", the state roll-up feature provided by the new Health Service, the new ways objects are discovered and instantiated, and the way their health models work.

Problem Management can also still be done, and Alert tuning will still be required (though it should be slightly easier now, with the improved "overrides" mechanism).
Service Level Management can also be done – and it will actually be done much better: if the system knows you've fixed the incident and closes the alert for you, SLA calculations will be based on the REAL down/up-times of services, not on people keeping Alerts open forever, as I have seen in many places.
This means it will be done better, WITHOUT relying on people.

All in all there are substantial changes in OpsMgr 2007, most of them for the better… but still, I think I will miss the fact that people had to actively look at their consoles and manage Alerts the way they were asked to before. I will miss all the talks I used to give about "you HAVE to manage your Alerts/Incidents" now.

MOM 2005 Alerts to RSS feed

Thursday, March 22nd, 2007

I am an RSS addict, you know that. So I wanted an RSS feed showing MOM Alerts. I have been thinking about it for a while, since last year (or was it the year before?).
It seemed like a logical thing to me: alerts are created (and can be resolved – that is, expire), they generally get sorted by the date and time they were created, and they look pretty much like a list. Also, many people like to receive a mail notification when new alerts are generated.
So, if an alert can be sent to you (push), you should also be able to go and get it (pull).
It is pretty much the same deal as receiving a mail versus reading a newsgroup, or syndicating a feed.

At the time I looked around, but it seemed like no one had already done something like this.
So I wrote a very simple RSS feed generator for MOM Alerts.
I did it quite a while ago, just as an exercise.
Then, after a while, I figured out that the MOM 2005 Resource Kit had been updated to include such a utility!

Wow, I thought, they finally added what I have been thinking about for a while. Might it be because I mentioned it on a private mailing list? Maybe. Maybe not. Who cares. Of course, if it is included in the resource kit it must be way cooler than the one I made, I thought.
I really thought something along these lines, but never actually had the time to try it out.
I think I just sort of assumed it must have been cooler than the one I made, since it was part of an official package, while I am not a developer. So I basically forgot about the one I wrote, dismissing it as being crap without looking too much into it anymore.
Until today.
Today I actually tried to use the Alert-to-RSS tool included in the resource kit, because a customer asked if there was any other way to get notified, other than the built-in notifications or using the console (or the console notifier).
So I looked at the resource kit's Alert-to-RSS Utility.
My experience with it:
1) it is provided in source code form – which would be fine if it were ALSO provided compiled. Instead it is ONLY provided as source, and most admins don't have Visual Studio installed or don't know how to compile from the command line;
2) even if you do want to compile it, it includes a bug which makes it impossible to compile – the solution is in this newsgroup discussion;
3) if you don't want to mess about with code, since this is a Resource Kit tool (as opposed to something in the SDK), you can even get it already compiled by someone from somewhere on the net, but that choice is a matter of trust.

Anyway, one way or another, after it is finally set up… surprise surprise!!!
It does NOT show a LIST of alerts (as I was expecting).
It shows a summary of how many alerts you have. Basically it is an RSS feed made of a single item, and this single item tells you how many alerts you have. What is one supposed to do with such a SUMMARY? IMHO, it is useless the way it is. It is even worse than one of those feeds that only contain the excerpt of an article rather than the full article.
Knowing that I have 7 critical errors and 5 warnings without actually knowing ANYTHING about them is pointless.
It might be useful for a manager, but not for a sysadmin, at least.

So I thought my version, even if coded badly, might be useful to someone, because it gives you a list of alerts (those that are not resolved), and each of them shows the description of the alert and the machine that generated it, and includes a link to the actual alert in the web console, so you can click, go there, and start troubleshooting from within your aggregator!
My code does this. Anyway, since I am a crap coder, since I wrote it in only fifteen minutes more than a year ago, and since I don't have time to fix it and make it nicer… it has several issues and could be improved in a million ways, in particular in the following aspects:

  1. it currently depends on the SDK database views – it could use the MOM Server APIs or the web service instead;
  2. it uses SQL security to connect to the DB – by default MOM does not allow this; it is recommended that the SQL instance hosting "OnePoint" only use Windows Integrated Authentication… so to make my code work you have to switch back to Mixed mode and create a SQL login that has permission to read the database. This is due to the fact that I coded this in five minutes and I don't know how to use delegation – if I were able to use delegation I would, so that the end user accessing IIS would be the one connecting to the DB. If anybody wants to teach me how to do this, I will be most grateful;
  3. it could accept parameters as URL variables, so as to filter only alerts for a specific machine, a specific resolution state, etc.;
  4. at present it uses RSS.NET to generate the feed. It could be made independent of it, but I don't really see why, and I quite like that library.

The code is just an ASP.NET page and its code-behind, no need to compile, but of course you need to change a couple of lines to match your web console address.
Also, you need to get RSS.NET and copy its library (RSS.Net.dll) into the /bin subfolder of the website directory where you place the RSS feed generator page. I see that I wrote this with version 0.86, but any version should do, really.

Here is what it will look like:

AlertToRSS
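Once the page is deployed, a quick way to sanity-check the raw XML it produces is to fetch it from PowerShell (the URL below is just a placeholder for wherever you published the page):

# Dump the generated feed XML to the console
(New-Object System.Net.WebClient).DownloadString("http://yourserver/rss/Default.aspx")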

And here's the code of the page (two files):

Default.aspx

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

Default.aspx.cs

using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Web;
using Rss;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string webconsoleaddress = "http://192.168.0.222:1272/AlertDetail.aspx?v=a&sid="; // must change to match your address

        // Initialize the feed
        RssChannel rssChannel = new RssChannel();
        rssChannel.Title = "MOM Alerts";
        rssChannel.PubDate = DateTime.Now;
        rssChannel.Link = new Uri("http://192.168.0.222:1272/rss/"); // must change to match your address
        rssChannel.LastBuildDate = DateTime.Now;
        rssChannel.Description = "Contains the latest Alerts";

        // query – you might want to change the severity
        string mySelectQuery = "SELECT ComputerName, Name, Severity, TimeRaised, RepeatCount, GUID FROM dbo.SDKAlertView WHERE Severity > 10 AND ResolutionState < 255";

        // SQL Connection – must change SQL server, user name and password
        SqlConnection conn = new SqlConnection("Data Source=192.168.0.222;Initial Catalog=OnePoint;User ID=rss;Password=rss");
        SqlDataReader rdr = null;

        try
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(mySelectQuery, conn);
            rdr = cmd.ExecuteReader();
            while (rdr.Read())
            {
                // One feed item per unresolved alert, linking back to the web console
                RssItem rssItem = new RssItem();
                string titleField = rdr[1].ToString();
                rssItem.Title = titleField;
                string url = webconsoleaddress + rdr[5];
                rssItem.Link = new Uri(url.ToString());
                string description = "<![CDATA[ <p><a href=\"" + rssItem.Link + "\">" + rdr[1] + " </a></p><br>" + "<br>Computer: " + rdr[0] + "<br>Repeat Count: " + rdr[4] + "<BR>Original Alert Time: " + rdr[3];
                rssItem.Description = description;
                rssChannel.Items.Add(rssItem);
            }

            // Finalize the feed and write it to the response stream
            RssFeed rssFeed = new RssFeed();
            rssFeed.Channels.Add(rssChannel);
            Response.ContentType = "text/xml";
            Response.ExpiresAbsolute = DateTime.MinValue;
            rssFeed.Write(Response.OutputStream);
        }
        finally
        {
            if (rdr != null)
            {
                rdr.Close();
            }

            if (conn != null)
            {
                conn.Close();
            }
        }
    }
}