Three quarters of 2015, my IT career and various ramblings

September is over. The first three quarters of 2015 are over.
This has been a very important year so far – difficult, but revealing. Everything has been about change, healing and renewal.

We moved back to Europe first and, more recently, you might also have read my other post about leaving Microsoft.

This was a hard choice – it took many months to reach the conclusion this is what I needed to do.

Most people have gone through strong programming: they think you have to be ‘successful’ at something. Success is externally defined, anyhow (as opposed to satisfaction, which we define ourselves), and therefore you are supposed to study a certain field in college, then use that at work to build your career in the same field… and keep doing the same thing.

I was never like that – I didn’t go to college, I didn’t study as an ‘engineer’. I just saw there was a market opportunity to find a job when I started, studied on the job, eventually excelled at it. But it never was *the* road. It just was one road; it has served me well so far, but it was just one thing I tried, and it worked out.
How did it start? As a pre-teen I had been interested in computers, then left that for a while, did ‘normal’ high school (in Italy at the time, this was really non-technological), then tried to study sociology for a little bit – I really enjoyed the Cultural Anthropology lessons there, and we were smoking good weed with some folks outside of the university, but I really could not be bothered to spend the following 5 or 10 years of my life just studying and ‘hanging around’ – I wanted money and independence to move out of my parents’ house.

So, without much fanfare, I revived my IT knowledge: upgraded my skill from the ‘hobbyist’ world of the Commodore 64 and Amiga scene (I had been passionate about modems and the BBS world then), looked at the PC world of the time, rode the ‘Internet wave’ and applied for a simple job at an IT company.

A lot of my friends were either not even searching for a job, with the excuse that there weren’t any, or spending time in university – in a time of change, when all the university-level jobs were taken anyway, so that would have meant waiting even longer after they had finished studying… I am not even sure they realized this until much later.
But I just applied, played my cards, and got my job.

When I went to sign it, they also reminded me they expected hard work at the simplest and humblest level: I would have to fix PC’s, printers, help users with networking issues and tasks like those – at a customer of theirs, a big company.
I was ready to roll up my sleeves and help that IT department however I would be capable of, and I did.
It all grew from there.

And that’s how my IT career started. I learned all I know of IT on the job, by working my ass off, studying extra hours, watching older and more expert colleagues, and gaining experience.

I am not an engineer.
I am, at most, a mechanic.
I did learn a lot about companies and the market, languages, designs, politics, and the human and technical factors in software engineering and the IT marketplace, over the course of the past 18 years.

But when I started, I was just trying to lend an honest hand, to get paid some money in return – isn’t that what work was about?

Over time IT got out of control. Like Venom in the Marvel comics, which made its appearance as a costume that SpiderMan started wearing… and which slowly took over, as the ‘costume’ was in reality a sort of alien symbiotic organism (like a pest).

You might be wondering what I mean. From the outside I was a successful Senior Program Manager of a ‘hot’ Microsoft product.
Someone must have mistaken my diligence and hard work for ‘talent’ or ‘desire for a career’ – but it never was that.
I got pushed up, taught to never turn down ‘opportunities’.

But I don’t feel this is my path anymore.
That type of work drains too much mental energy from me, and it made me neglect myself and my family. Success at the expense of my own health and my family’s isn’t worth it. Some other people have written that too – in my case, hopefully, I stopped earlier.

So what am I doing now?

First and foremost, I am taking time for myself and my family.
I am reading (and writing)
I am cooking again
I have been catching up on sleep – and I have dreams again
I am helping my father-in-law build a shed in his yard
We bought a 14-year-old Volkswagen van that we are turning into a camper
I have not stopped building guitars – in fact, I am getting set up to do it ‘seriously’, so I am also standing up a separate site to promote that activity
I am making music and discovering new music and instruments
I am meeting new people and new situations

There’s a lot of folks out there who either think I am crazy (they might be right, but I am happy this way) or think this is some sort of lateral move – I am not searching for another IT job, thanks. Stop the noise on LinkedIn, please: I don’t fit in your algorithms; I just made you believe I did, all these years.

Capturing your knowledge/intelligence should be SIMPLE

Lately this blog has been very personal. This post, instead, is about stuff I do at work – so if you are not one of my IT readers, don’t worry.

For my IT readers, here is an interruption from guitars and music on this blog to share some personal reflections on OpInsights and SCOM.

SCOM is very powerful. You know I have always been a huge fan of the 2007 release, and I worked myself on the 2012 release. But, compared to its predecessor – MOM – authoring management packs in SCOM has always been very hard: multiple tools, a lot of documentation… Here we are, more than 6 years later, and the first two comments on an old post on the momteam blog still strike me hard every time I read them:

whatever happened to click,click,done?

You would think that things have changed by now, but SCOM is fundamentally complex, and even with the advances in tooling (VSAE, MPAuthor, etc.), writing MPs is still black magic, if you ask some users.

I already blogged about exporting an MP and converting its event-based alerting rules to OpInsights searches.

Well, writing those alerting rules in SCOM requires a lot of complex XML – you might not need to know how to write it (though you often have to attempt deciphering it), and even if you create rules with a wizard, the wizard will produce a lot of complex XML for you.

In the screenshot below is the large XML chunk needed to pick up a specific eventId from a specific log and a specific source: the key/important information is only a small fraction of it, while the rest is ‘packaging’:

[Screenshot: the full XML of an event-collection rule – the eventId, event log and event source are buried among dozens of lines of wrapper elements]
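For readers who cannot see the screenshot, here is a rough, hand-written sketch of the shape of that XML. Element names and structure are approximate, written from memory – this is not a schema-valid MP fragment, just an illustration of how little of it is the actual criteria:

```xml
<Rule ID="Example.Collect.IIS.Event1037" Enabled="true"
      Target="Windows!Microsoft.Windows.Computer">
  <DataSources>
    <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.EventProvider">
      <ComputerName>$Target/Property[...]/NetworkName$</ComputerName>
      <LogName>System</LogName>                       <!-- key info #1 -->
      <Expression>
        <And>
          <Expression>
            <SimpleExpression>
              <ValueExpression>
                <XPathQuery Type="UnsignedInteger">EventDisplayNumber</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                <Value Type="UnsignedInteger">1037</Value>  <!-- key info #2 -->
              </ValueExpression>
            </SimpleExpression>
          </Expression>
          <Expression>
            <SimpleExpression>
              <ValueExpression>
                <XPathQuery Type="String">PublisherName</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                <Value Type="String">Microsoft-Windows-IIS-W3SVC</Value>  <!-- key info #3 -->
              </ValueExpression>
            </SimpleExpression>
          </Expression>
        </And>
      </Expression>
    </DataSource>
  </DataSources>
  <WriteActions>...</WriteActions>
</Rule>
```

Three small values carry all the meaning; everything else is the ‘packaging’ the text above complains about.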

I want OpInsights to be SIMPLE.

If there is one thing I want the most for this project, it is this.

That’s why the same rule can now be expressed with a simple filter search in OpInsights, where all you need is just that key information:

EventID=1037 Source="Microsoft-Windows-IIS-W3SVC" EventLog=System

and you essentially don’t have to care about any sort of packaging nor mess with XML.

Click, click – filters/facets in the UI let you refine your criteria, and so do your saved searches. They execute right away; there is not even a ‘Done’ button to press. You might just be watching those searches pinned as tiles on your dashboard. All it took was identifying the three key pieces of information – no complex XML wrapping needed!
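To give a flavour of that refinement, here are two illustrative variations on the same search – written from memory of the early OpInsights search language, so treat the exact syntax as approximate, and the computer name WEB01 as a made-up example. The first narrows the results to one machine; the second aggregates counts per machine with a pipe command:

```
EventID=1037 Source="Microsoft-Windows-IIS-W3SVC" EventLog=System Computer="WEB01"

EventID=1037 Source="Microsoft-Windows-IIS-W3SVC" EventLog=System | Measure count() by Computer
```

Either of these can be saved and pinned to a dashboard tile as-is – still no XML in sight.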

Ok, granted – there ARE legitimate, more complex scenarios for which you need complex data sources/collectors and specialized, well-thought-out data shaping, not just events – and we use those powerful capabilities of the MMA agent in intelligence packs. But at its core, the simple search language and the explorability of the data are meant to bring SIMPLE back to the modern monitoring world. Help us prioritize which data sources you need first!

PS – if you had no idea what I was talking about – thanks for making it this far, but don’t worry: either you are not an IT person, in which case simply ignore this; or – if you are an IT person – go check out Azure Operational Insights!

Does anyone have a new System Center sticker for me?


I got this sticker last APRIL at MMS2010 in JUST ONE COPY, and I waited till I got a NEW laptop in SEPTEMBER to actually use it…
It also took a while to stick it on properly (and to re-install the PC the way I wanted), but this week they told me that, by mistake, I had been given the wrong machine (they picked it themselves, tho – I did not ask for any specific one) and this one needs to be replaced!!!!

This is WORSE than any hardware FAILure, as the machine just works very well and I was expecting to keep it for the next two years 🙁

Can anyone be so nice to send me one of those awesome stickers again? 🙂

Early Adoptions, Health Checks and New Year Rants.

Generations

Two days ago I read the following Tweet by Hugh MacLeod:

“[…] Early Adopter Problem: How to differentiate from the bandwagon, once the bandwagon starts moving faster than you are […]”

That makes me think of early adoption of a few technologies I have been working with, and how the community around those evolved. For example:

Operations Manager… early adoption meant that I have been working with it since the beta, and that I posted one of the earliest posts about how to use a script in a Unit Monitor back in May 2007 (the product was released in April 2007 and there was NO documentation back then, so we really had to figure everything out ourselves…), but someone seems to think it is worth repeating the very same lesson in November 2008, with not a lot of changes, as I wrote here. I don’t mean to be rude to Anders… repeating things will surely help the late adopters find the information they need, of course.

I also started playing early with PowerShell. I posted my first (and only) cmdlet back in 2006. It was not much more than a test for myself, to learn how to write one, but that is just to say that I started playing with it early. I have been using it to automate tasks, for example.

Going back to the quote above, everyone now gets on the bandwagon, posting examples and articles. I had been asked a few times to write articles on OpsMgr and PowerShell usage (for example by www.powershell.it) but I declined, as I was too busy using this knowledge to do stuff for work (where “work” is defined as “work that pays your mortgage”), rather than seeking personal prestige through articles and blogs. Anyway, articles of that kind are now appearing all over the Internet and the blogosphere. The above examples made me think of early adoption, and of the bandwagon that follows later on… but even as an early adopter, I was never very noisy or visible.

Now, going back to what I do for work (which I mentioned here and here in the past): I work in the Premier Field Engineering organization of Microsoft Services, which provides Premier services to customers. Microsoft Premier customers have a wide range of Premier agreement features and components that they can use to support their people, improve their processes, and improve the productive use of the Microsoft technology they have purchased. Some of the services we provide are known to the world as “Health Checks”, some as “Risk Assessment Programs” (or, shortly, RAPs).

These are basically services where one of our technology experts goes to the customer site and uses a custom, private Microsoft tool to gather a huge amount of data from the product we mean to look at (be it SQL, Exchange, AD or anything else…). The Health Check or RAP tool collects the data and outputs a draft of the report that will be delivered to the customer later on, with all the right sections and chapters. This is done so that every report of the same kind will look consistent, even if the engagement is performed by a different engineer in a different part of the world. The engineer will of course analyze the collected data and write recommendations about what is configured properly and/or about what could or should be changed and/or improved in the implementation to make it adhere to Best Practices.

To make sure only the right people actually go onsite to do this job, we have a strict internal accreditation process that must be followed; only accredited resources that know the product well enough and know exactly how to interpret the data the tool collects are allowed to use it, to deliver the engagement, and to present and write the findings to the customer.

So why am I telling you this here, and what have I been using my early knowledge of OpsMgr and PowerShell for?

I have used that to write the Operations Manager Health Check, of course!

We had a MOM 2005 Health Check already, but since the technology has changed so much from MOM to OpsMgr, we had to write a completely new tool. Jeff (the original MOM 2005 author, who does not have a blog I can link to) and I are the main coders of this tool… and the tool itself is A POWERSHELL script. A longish one, of course (7000 lines, more or less), but nothing more than a PowerShell script at the end of the day. A few more colleagues helped shape the features and tested the tool, including Kevin Holman. Some of the database queries on Kevin’s blog are in fact what we use to extract some of the data (beware that some of those queries have recently been updated, in case you saved them and are using your local copy!), while other information comes from internal and/or custom queries. At other times we use OpsMgr cmdlets or go to the SDK service, but a lot of the time we query the database directly (we really should use the SDK all the time, but for certain stuff direct database access is way faster). It took most of the past year to write it, test it, troubleshoot it, fix it, and deliver the first engagements as “beta” to some customers to help iron out the process… and now the delivery is available! If a year seems like a long time, you have to consider that this is all work that gets done next to what we normally have to do with customers, not replacing it (i.e. I am not free to sit on my butt all day and just write the tool… I still have to deliver services to customers day in, day out, in the meantime).

Occasionally, during this past calendar year that is approaching its end, I have been willing – and have found some extra time – to disclose some bits and pieces, techniques and prototypes of how to use PowerShell and OpsMgr together, such as innovative ways to use PowerShell in OpsMgr against beta features, but in general most of my early adopter’s investment went into the private tool for this engagement, and that is one of the reasons I couldn’t blog or write much about it, it being Microsoft intellectual property.

But it is also true that I did not care to write about other stuff when I considered it too easy or when it could be found in the documentation. I like writing about ideas, thoughts, rants OR things that I discover and that are not well documented at the time I study them… so when I figure things out, I might like leaving a trail for some to follow. But I am not here to spoon-feed people like some in the bandwagon are doing. Now the bandwagon is busy blogging and writing continuously about every aspect of OpsMgr (known or unknown, documented or not), and the answer to Hugh’s original question, in my opinion, is that it does not really matter what the bandwagon is doing right now. I was never here to do the same thing. I think that is my differentiator. I am not saying that what a bunch of colleagues and enthusiasts are doing is not useful: blogging and writing about the various things they experiment with is interesting and will be useful to people. But blogs are useful only up to a point. I think blogs are best suited for conversations and thoughts (rather than for “howtos”), and what I would love to see instead is less marketing hype when new versions are announced and more real, official documentation.

But I think I should stop caring about what the bandwagon is doing, because that’s just another ego trip at the end of the day. What I should more sensibly do is listen to my horoscope instead:

[…] “How do you slay the dragon?” journalist Bill Moyers asked mythologist Joseph Campbell in an interview. By “dragon,” he was referring to the dangerous beast that symbolizes the most unripe and uncontrollable part of each of our lives. In reply to Moyers, Campbell didn’t suggest that you become a master warrior, nor did he recommend that you cultivate high levels of sleek, savage anger. “Follow your bliss,” he said simply. Personally, I don’t know if that’s enough to slay the dragon — I’m inclined to believe that you also have to take some defensive measures — but it’s definitely worth an extended experiment. Would you consider trying that in 2009? […]

Simply Works


I don’t know about other people, but when the end of the year approaches I get a lot to think about: all that I’ve done, what I have not yet done, what I would like to do, and so on…
And it is a period when memories surface.

I found the two old CD-ROMs you can see in the picture. And those are memories.
missioncritical software was the company that invented a lot of stuff that later became Microsoft products: for example ADMT and Operations Manager.

The black CD contains SeNTry, the “enterprise event manager”, what later became Operations Manager.
On the back of the CD, the company motto at the time: “software that works simply and simply works”.
So true. I might digress on this concept, but I won’t do that right now.

I have already explained in my other blog what I do for work. Well, that was a couple of years ago anyway; several things have changed, and we are moving towards offering services that are more measurable and professional. It happens that in this kind of job you need to be an “expert” and to “specialize” in order to be “seen” or “noticed”.
You know I don’t really believe in specialization. I have written that all over the place. But you need to make other people happy as well and let them believe what they want, so when you “specialize” they are happier. No, really, it might make a difference in your career 🙂

In this regard, I did also mention my “meeting again” with Operations Manager.
That’s where Operations manager helped me: it let me “specialize” in systems and applications management… a field where you need to know a bit of everything anyway: infrastructure, security, logging, scripting, databases, and so on… 🙂
This way, everyone wins.

Don’t misunderstand me: this does not mean I want to know everything. One cannot possibly know everything, and the more I learn the more I believe I know nothing at all, to be honest. I don’t know everything, so please don’t ask me everything – I work with mainframes 🙂
While that can be a great excuse to avoid neighbours and relatives pestering me with their PC issues, on the serious side I still believe that no intelligent individual can be locked into doing one narrow thing, knowing only that one bit, just because it is common thought that you have to act that way.

If I stopped where I am supposed to stop, I would be the standard “IT Pro”. I would be fine, sure, but I would get bored soon, and I would not learn anything. But I don’t feel I am the standard “IT Pro”. In fact, funnily enough, on some other blogs out there I have been referenced as a “Dev” (find it on your own, look at their blogrolls :-)). But I am not a Dev either… I don’t write code for work. I would love to, but I rarely actually do, other than some scripts. Anyway, I tend to escape the definition of the usual “expert” on something… mostly because I want to escape it. I don’t see myself represented by those generalizations.

As Phil puts it, when asked “Are software developers – engineers or artists?”:

“[…] Don’t take this as a copout, but a little of both. I see it more as craftsmanship. Engineering relies on a lot of science. Much of it is demonstrably empirical and constrained by the laws of physics. Software is less constrained by physics as it is by the limits of the mind. […]”

Craftsmanship. Not science.
And stop calling me an “engineer”. I am not an engineer. I was even crap at math in school!

Anyway, what does this all mean? In practical terms, it means that in the end, whether I want it or not, I do get considered an “expert” on MOM and OpsMgr… and that I will mostly work on those products next year too. But that is not bad, because, as I said, working on that product means working on many more things too. Also, I can speak to different audiences: those believing in “experts” and those going beyond schemes. It also means that I will have to continue teaching a couple of scripting classes (both VBScript and PowerShell) that nobody else seems willing to teach (because they are all *experts* in something narrow), and that I will still be hacking together my other stuff (my Facebook apps, my WordPress theme and plugins, my server, etc.) and will even continue to have strong opinions in those other fields that I find interesting and where I am not considered an *expert* 😉

Well, I suppose I’ve been ranting enough for today…and for this year 🙂
I really want to wish everybody again a great beginning of 2008!!! What are you going to be busy with, in 2008 ?

MOM2005 vs. OpsMgr2007 and ITIL ?

 

MOM has always been a great tool out of the box because it sort of FORCED you to implement an Incident Management Process to deal with Alerts, as described here:
http://ianblythmanagement.wordpress.com/2006/07/27/mom-2005-and-itil-part-1/
In fact, Alerts had to be actually set to “Resolved”, and this had to be done manually.

I have now been wondering for a while: “How is OpsMgr 2007 going to affect this?” I refer to the fact that OpsMgr 2007 now does something customers have been asking about for a while: it can auto-resolve alerts as soon as the incident/issue is fixed, by monitoring the state of the component rather than waiting for people to resolve the alert!

Practically, people were often the bottleneck, due to a missing Incident Management process. MOM tried for nearly 8 years to push them to implement one… and I feel it finally gave up even trying.

All the other stuff described in the other two articles of Ian’s series still applies.

For Capacity Management nothing substantially changes.
Availability Management is greatly improved, with the generic “availability report”, the state roll-up feature provided by the new Health Service, the new way objects are discovered and instantiated, and the way their health models work.

Problem Management can also still be done, and Alert tuning will still be required (but it should be slightly easier now, with the improved “overrides” mechanism).
Service Level Management can also be done – and it will actually be done much better: if the system knows you’ve fixed the incident and closes the alert for you, SLA calculations will be based on the REAL down/up-times of services, not on people keeping Alerts open forever like I have seen in many places.
This means it will be done better, WITHOUT relying on people.

All in all there are substantial changes in OpsMgr 2007, most of them for the better… but still, I think I will miss the fact that people had to actively look at their consoles and manage Alerts the way they were asked to before. I will miss all the talks I used to give about “you HAVE to manage your Alerts/Incidents”, now.

MOM 2005 Alerts to RSS feed

I am an RSS addict, you know that. So I wanted an RSS feed showing MOM Alerts. I have been thinking about it for a while, since last year (or was it the year before?).
It seemed like a logical thing to me: alerts are created (and can be resolved – that is, expire), generally get sorted by the date and time they were created, and they look pretty much like a list. Also, many people like to receive mail notifications when new alerts are generated.
So, if the alert can be sent to you (push), you could also go get it (pull).
It is pretty much the same deal as receiving a mail versus reading a newsgroup, or syndicating a feed.

At the time I looked around but it seemed like no one had something like this already done.
So I wrote a very simple RSS feed generator for MOM Alerts.
I did it quite an amount of time ago, just as an exercise.
Then, after a while, I figured out that the MOM 2005 Resource Kit had been updated to include such a utility!

Wow, I thought, they finally added what I had been thinking about for a while. Might it be because I mentioned it on a private mailing list? Maybe. Maybe not. Who cares. Of course, if it is included in the resource kit, it must be way cooler than the one I made, I thought.
I really thought something along these lines, but never actually had the time to try it out.
I think I just sort of assumed it must be cooler than the one I made, since it was part of an official package, while I am not a developer. So I basically forgot about the one I wrote, dismissing it as crap without looking into it any further.
Until today.
Today I actually tried to use the alert-to-RSS tool included in the resource kit, because a customer asked if there was any other way to get notified, other than receiving notifications or using the console (or the console notifier).
So I looked at the resource kit’s Alert-to-RSS Utility.
My experience with it:
1) it is provided in source code form – which would be fine if it were ALSO provided compiled. Instead it is ONLY provided as source, and most admins don’t have Visual Studio installed or don’t know how to compile from the command line;
2) even when they want to compile it, it includes a bug which makes it impossible to compile – solution in this newsgroup discussion;
3) if you don’t want to mess about with code, since you are using a resource kit tool (as opposed to something present in the SDK), you can even get it already compiled by someone from somewhere on the net, but that choice is about trust.

Anyway, one way or another, after it is finally set up…. surprise surprise!!!
It does NOT show a LIST of alerts (as I was expecting).
It shows a summary of how many alerts you have. Basically it is an RSS feed made of a single item, and this single item tells you how many alerts you have. What is one supposed to do with such a SUMMARY? IMHO, it is useless the way it is. It is even worse than one of those feeds that only contain the excerpt of an article, rather than the full article.
Knowing that I have 7 critical errors and 5 warnings without actually knowing ANYTHING about them is pointless.
It might be useful for a manager, but not for a sysadmin, at least.

So I thought my version, even if crappily coded, might be useful to someone, because it gives you a list of alerts (those that are not resolved), and each one of them gives you the description of the alert and the machine that generated it, and includes a link to the actual alert in the web console, so you can click, go there, and start troubleshooting from within your aggregator!
My code does this. Anyway, since I am a crap coder, since I wrote it in only fifteen minutes more than a year ago, and since I don’t have time to fix it and make it nicer… it has several issues and could be improved in a million ways, in particular in the following aspects:

  1. it currently depends on the SDK database views – it could use the MOM Server APIs or the web service instead;
  2. it uses SQL security to connect to the DB – by default MOM does not allow this: it is suggested that the SQL instance hosting “OnePoint” only use Windows Integrated Authentication… so to make my code work you have to switch back to Mixed mode and create a login in SQL that has permission to read the database. This is due to the fact that I coded this in five minutes and I don’t know how to use delegation – if I were able to use delegation, I would… so that the end user accessing IIS would be the one connecting to the DB. If anybody wants to teach me how to do this, I will be most grateful;
  3. it could accept parameters as URL variables, so as to filter out only events for a specific machine, or a specific resolution state, etc.;
  4. at present it uses RSS.NET to generate the feed. It could be made independent from it, but I don’t really see why, and I quite like that library.

The code is just an ASP.Net page and its codebehind, no need to compile, but of course you need to change a couple of lines to match your webconsole address.
Also, you need to get RSS.NET and copy its library (RSS.Net.dll) into the /bin subfolder of the website directory where you place the RSS feed generator page. I see that I wrote this with version 0.86, but any version should do, really.

Here is what it will look like:

AlertToRSS

And here’s the code of the page (two files):

Default.aspx

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

Default.aspx.cs

using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Web;
using Rss;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string webconsoleaddress = "http://192.168.0.222:1272/AlertDetail.aspx?v=a&sid="; // must change to match your address

        // Initialize the feed
        RssChannel rssChannel = new RssChannel();
        rssChannel.Title = "MOM Alerts";
        rssChannel.PubDate = DateTime.Now;
        rssChannel.Link = new Uri("http://192.168.0.222:1272/rss/"); // must change to match your address
        rssChannel.LastBuildDate = DateTime.Now;
        rssChannel.Description = "Contains the latest Alerts";

        // query – you might want to change the severity
        string mySelectQuery = "SELECT ComputerName, Name, Severity, TimeRaised, RepeatCount, GUID FROM dbo.SDKAlertView WHERE Severity > 10 AND ResolutionState < 255";

        // SQL connection – must change SQL server, user name and password
        SqlConnection conn = new SqlConnection("Data Source=192.168.0.222;Initial Catalog=OnePoint;User ID=rss;Password=rss");
        SqlDataReader rdr = null;

        try
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(mySelectQuery, conn);
            rdr = cmd.ExecuteReader();
            while (rdr.Read())
            {
                // one feed item per unresolved alert, linking back to the web console
                RssItem rssItem = new RssItem();
                rssItem.Title = rdr[1].ToString();
                string url = webconsoleaddress + rdr[5];
                rssItem.Link = new Uri(url);
                string description = "<![CDATA[ <p><a href=\"" + rssItem.Link + "\">" + rdr[1] + " </a></p><br>" + "<br>Computer: " + rdr[0] + "<br>Repeat Count: " + rdr[4] + "<br>Original Alert Time: " + rdr[3];
                rssItem.Description = description;
                rssChannel.Items.Add(rssItem);
            }

            // Finalize the feed and write it to the response stream
            RssFeed rssFeed = new RssFeed();
            rssFeed.Channels.Add(rssChannel);
            Response.ContentType = "text/xml";
            Response.ExpiresAbsolute = DateTime.MinValue;
            rssFeed.Write(Response.OutputStream);
        }
        finally
        {
            if (rdr != null)
            {
                rdr.Close();
            }

            if (conn != null)
            {
                conn.Close();
            }
        }
    }
}
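As an aside, improvement 3 from the list above (accepting URL variables) could be bolted onto this page with something roughly like the following. This is a hypothetical, untested sketch, not part of the original tool: `conn` is the SqlConnection already opened in the code above, `computer` and the `?computer=SERVER01` variable name are made up for illustration, and the SqlParameter keeps the value coming from the URL from being concatenated into the SQL:

```csharp
// Hypothetical sketch: filter the feed by an optional URL variable,
// e.g. http://yourserver/rss/Default.aspx?computer=SERVER01
string computer = Request.QueryString["computer"];

string mySelectQuery =
    "SELECT ComputerName, Name, Severity, TimeRaised, RepeatCount, GUID " +
    "FROM dbo.SDKAlertView WHERE Severity > 10 AND ResolutionState < 255";

if (!String.IsNullOrEmpty(computer))
{
    // parameterized, so the value from the URL cannot inject SQL
    mySelectQuery += " AND ComputerName = @computer";
}

SqlCommand cmd = new SqlCommand(mySelectQuery, conn);

if (!String.IsNullOrEmpty(computer))
{
    cmd.Parameters.AddWithValue("@computer", computer);
}
```

The same pattern would extend to a resolution-state filter or any of the other columns in the view.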
