Archive for the 'Networking' Category

RSS Feed for the 'Networking' Category

Repost: Useful SetSPN tips

Wednesday, October 19th, 2011

I just saw that my former colleague Tristan (a PFE) has posted an interesting note about the use of SetSPN “–A” vs SetSPN “–S”. I normally don’t repost other people’s content, but I thought this one would be useful: a few SPNs are used in OpsMgr and it is not always easy to get them all right… and by reading his post you can find a few tricks I was not aware of.

Check out the original post at http://blogs.technet.com/b/tristank/archive/2011/10/10/psa-you-really-need-to-update-your-kerberos-setup-documentation.aspx

Does anyone have a new System Center sticker for me?

Saturday, November 27th, 2010

Does anyone have a new System Center sticker?

I got this sticker last APRIL at MMS2010 in JUST ONE COPY, and I waited until I got a NEW laptop in SEPTEMBER to actually use it…
It also took a while to stick it on properly (not to mention re-installing the PC the way I wanted…), but this week they told me that, by mistake, I had been given the wrong machine (it was entirely their doing, though – I did not ask for any specific one) and this one needs to be replaced!!!!

This is WORSE than any hardware FAILure, as the machine just works very well and I was expecting to keep it for the next two years :-(

Can anyone be so nice to send me one of those awesome stickers again? :-)

Programmatically Check for Management Pack updates in OpsMgr 2007 R2

Saturday, November 29th, 2008

One of the cool new features of System Center Operations Manager 2007 R2 is the ability to check for and update Management Packs from the catalog on the Internet directly from the Operations Console:

Select Management Packs from Catalog

Even if the backend for this feature is not yet documented, I was extremely curious to see how it had actually been implemented. Especially since it took a while for this feature to become available in OpsMgr, I suspected it could not be as simple as a single downloadable XML file, like the one the old MOM 2005 MPNotifier used.

Therefore I observed the console's traffic through the lens of my proxy, and got my answer:

ISA Server Log

So that was it: a .Net Web Service.

I tried to ask the web service itself for discovery information, but failed:

WSDL

Since there is no WSDL available, but I badly wanted to interact with the service, I had to figure out what kind of requests it would accept, how they should be written, what methods could be called, and what parameters I should pass in the call. To get started, I thought I could simply observe its network traffic. And so I did: I fired up Network Monitor and captured the traffic:

Microsoft Network Monitor 3.2

Microsoft Network Monitor is beautiful and useful for this kind of work, as it lets you easily identify which application a given stream of traffic belongs to, as in the picture above. After I had isolated just the traffic from the Operations Console, I saved the captured packets in CAP format and opened them in Wireshark for a different kind of analysis – "Follow TCP Stream":

Wireshark: Follow TCP Stream

This showed me the reassembled conversation, and what kind of request was actually done to the Web Service. That was the information I needed.

Ready to rock at this point, I came up with this PowerShell script (to be run in the OpsMgr Command Shell) that will:

1) connect to the web service and retrieve the complete MP list for R2 (this part is also useful on its own, as it shows how to interact with a SOAP web service from PowerShell, invoking a method of the web service by issuing a specially crafted POST request; to give due credit, for this part I first looked at this PERL code, which I then adapted and ported to PowerShell);

2) loop through the results of the "Get-ManagementPack" opsmgr cmdlet and compare each MP found in the Management Group with those pulled from the catalog;

3) display a table of all imported MPs with both the version imported in your Management Group AND the version available on the catalog:

Script output in OpsMgr Command Shell
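Step 2 above boils down to comparing version strings (MP versions are dotted quads such as 6.0.5000.0). As a rough, hypothetical sketch outside of OpsMgr – the real script uses the Get-ManagementPack cmdlet and the catalog's SOAP response, and the version numbers below are made up – the comparison could look like this in plain shell:

```shell
#!/bin/sh
# Hypothetical sketch of the version comparison in step 2:
# flag an MP as updatable when the catalog version is newer
# than the imported one. Versions below are illustrative.
imported="6.0.5000.0"
catalog="6.0.6246.0"

# 'sort -V' compares version strings field by field;
# the newest version sorts last.
newest=$(printf '%s\n%s\n' "$imported" "$catalog" | sort -V | tail -n 1)

if [ "$newest" = "$catalog" ] && [ "$catalog" != "$imported" ]; then
    echo "update available: $imported -> $catalog"
else
    echo "up to date: $imported"
fi
```

The same idea, applied per MP name, is what produces the two-column table in the screenshot above.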

Remember that this is just SAMPLE code: it is not meant to be used in a production environment. It is also worth mentioning again that OpsMgr 2007 R2 is BETA software at the time of writing, so this functionality (and its implementation) might change at any time, and the script will break. Also, at present, the MP Catalog web service still returns slightly older MP versions and is not yet kept in sync with MP releases, but it should have complete and updated content by the time R2 is released.

Disclaimer

The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED, CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENTS SOMETHING I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE HELD RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

A Rant about Openness

Friday, May 2nd, 2008

It is interesting to see that the number of open source projects written on and for the Microsoft platform keeps growing, and it is also nice to see that a lot of Microsoft employees are very active in, and aware of, the open source ecosystem, rather than sticking only with what the company makes. Phil Haack, in a post about an interview with Brad Wilson, wisely writes:

"[...] What I particularly liked about this post was the insight Brad provides on the diverse views of open source outside and inside of Microsoft as well as his own personal experience contributing to many OSS projects. It's hard for some to believe, but there are developers internal to Microsoft who like and contribute to various open source projects. [...]"

In fact, whether made by Microsoft people or not, the list of open source software on CodePlex keeps growing too. Speaking of CodePlex and interviews, another interesting one is the interview with Sara Ford, Program Manager for CodePlex, posted on Microspotting. Microspotting is awesome in general, actually. My favorite quote of hers:

"[...] Hey. My name is Ariel and I'm the person you thought would never work at MSFT [...]".

In fact, just as I do, she runs her blog on WordPress, posts her photos on Flickr, uses an RSS feed on Feedburner – in general, a bunch of things that might be seen as "competing" with what Microsoft makes. This attitude towards other products and vendors on the market is what I am mainly interested in. Should we only use flagship products? Sure, when they help us, but not necessarily. Who cares? People's blogs are not, as some would like them to be, a coordinated marketing effort. This is about real people, real geeks, who just want to share and communicate personal ideas and thoughts. I had a blog before joining Microsoft, after all. Obviously I had exposure to competing products: my server was running LAMP on Novell NetWare in 2002, after which I moved it to Linux. It is not a big deal.

If I try to put things in perspective, this is in fact turning out to be an advantage. I say this because the latest news about interoperability comes from MMS (Microsoft Management Summit): the announcement that System Center Operations Manager will monitor Linux natively. I find this extremely exciting, and a step in the right direction… to say it all, I am LOVING this!!! At the same time, I see some colleagues in technical support who are worried and scared by it: "if we monitor Linux and Unix, we are supposed to have at least some knowledge of those systems", they say. Right. We probably do. At the moment there are probably only a limited number of people who can actually do that, at least in my division. But that is because, in the past, they sacrificed their own curiosity to become "experts" in some very narrow, "specialized" thing. There you go. I, on the contrary, kept using Linux – even when other "old school" employees would call me names. All of a sudden, someone else realizes my advantage.
…but a lot of geeks already understood the power of exploration, and won't stop at defining people by easy labels. Another cool quote I read the other day is what Jimmy Schementi wrote in his Flickr profile:

"[...] I try to do everything, and sometimes I get lucky and get good at something [...]".

Reading his blog, it looks like he also gave up on writing a Twitter plugin for MSN Live Messenger (or maybe he never tried – a Messenger plugin is what I wanted, at least) and wrote one for Pidgin instead. Why did he do that? I don't know; I suppose because it was quicker/easier – and there were APIs and code samples to start from.

The bottom line, for me, is that geeks are interested in figuring out cool things (no matter what language or technology they use) and eventually communicating them. They tend to be pioneers of technologies. They try out new stuff. Open Source development is a lot about agility and "trying out" new things. Another passage of Brad's interview says:

"[...] That's true–the open source projects I contribute to tend to be the "by developer, for developer" kind, although I also consume things that are less about development [...] Like one tool that I've used forever is the GIMP graphics editor, which I love a lot".

That holds true when you consider that a lot of these things are not really mainstream. Tools made "by developers, for developers" are usually a sort of experimental ground. Like Twitter. Every geek is talking about Twitter these days, but you can't really say it is mainstream. Twitter has quite a few interesting aspects, though, and that's why geeks are on it. It lets me keep up to date quicker and better (and with a personal, conversational touch) – even better than RSS feeds and blogs do. Also, there are a lot of Microsofties on Twitter, and the cool thing is that you can really talk to everybody, at any level.

Not everybody "gets" blogs, social networks and microblogging, of course; you cannot expect everybody to be on top of the tech news, or to use experimental technologies. So in a way, stuff like Twitter is "by geeks, for geeks" (not really just for developers – there are a lot of "media" people on Twitter too). Much in the same way, a lot of people I work with (in direct contact, every day) only found out about LinkedIn this year (2008!). I joined Orkut and LinkedIn in 2004; Orkut was in private beta back then. A lot of this stuff never becomes mainstream; some does. But it is cool to discover it when it gets born. How long did it take for social networking to become mainstream? So long that, by the time it is mainstream for others, I have seen it for so long that I am even getting tired of it.

For some reason, geeks love to be pioneers. This is well expressed in a digression by Chris Pratley:

"[...] some of them we will be putting out on officelabs.com for the general public (you folks!) to try so we can understand how "normal" people would use these tools. Now of course, as we bloggers and blog-readers know, we're not actually normal – you could even debate whether the blogosphere is more warped than the set of Microsoft employees, who comprise an interesting cross-section of job types, experiences, and cultures. But I digress. [...]"

But I have been digressing, too, all along. As usual.

Why do developers tend to forget about people behind proxy servers?

Monday, August 13th, 2007

I know this is a very common issue.

I keep finding way too much software that claims to interact with Web 2.0 sites or services, connecting here and there… while still forgetting one basic, simple rule: let people use a proxy.

Most programmers, for some reason, just assume that since they are directly connected to the Internet, everybody is. Which isn't always the case: most companies have proxies and will only let you out on port 80 – through their proxy.

…which in turn is one of the reasons why most applications now "talk" by tunnelling whatever application protocol they use on top of HTTP… yet a lot of software simply "forgets", or doesn't bother, to provide a simple "use proxy" checkbox, which would translate into two or three extra lines of code – lines I personally include in my projects, and I am not even a *developer*!! (That might explain why I *think* of it, though… I come from a security and networking background :-))
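To illustrate how little code this takes (a hypothetical sketch of the convention, not any particular product's implementation), honoring a proxy can be as simple as reading the standard http_proxy environment variable and passing it along – exactly what tools like curl and wget already do:

```shell
#!/bin/sh
# Minimal proxy awareness: respect the conventional http_proxy /
# HTTP_PROXY environment variables instead of assuming a direct
# Internet connection. Variable names below are illustrative.
proxy="${http_proxy:-${HTTP_PROXY:-}}"

if [ -n "$proxy" ]; then
    # The user is behind a proxy: route the request through it.
    fetch_args="--proxy $proxy"
else
    # Direct connection: no extra arguments needed.
    fetch_args=""
fi

echo curl $fetch_args "https://example.com/api"
```

For authenticated proxies, the same variable conventionally carries the credentials too (e.g. http://user:pass@proxy:8080), so even that case costs nothing extra.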

I thought of writing this post after having read this post by Saqib Ullah.

Anyway. I keep running into this over and over again, both in simple hobbyist samples and in complex, big, expensive enterprise software. The last time I got pissed off about a piece of code missing this feature was a few days ago, while testing http://www.codeplex.com/FacebookToolkit. The time before was during Windows Vista beta-testing (I had found a similar issue in Beta 2, and had it fixed for RC1).

Actually, I am being polite in calling it a "missing feature". To be honest, I think missing this "feature" should be considered a bug: every piece of software using HTTP *should* include the possibility to pass through a proxy (and don't forget about AUTHENTICATED proxies), or the purpose of using HTTP in the first place is defeated!!

Developers!!! You have to remember people ARE behind proxies !!!!!

MOM 2005 Alerts to RSS feed

Thursday, March 22nd, 2007

I am an RSS addict, you know that. So I wanted an RSS feed showing MOM alerts. I had been thinking about it for a while, since last year (or was it the year before?).
It seemed like a logical thing to me: alerts are created (and can be resolved – that is, expire), they generally get sorted by the date and time they were created, and they look pretty much like a list. Also, many people like to receive mail notifications when new alerts are generated.
So, if an alert can be sent to you (push), you should also be able to get to it (pull).
It is pretty much the same deal as receiving mail, reading a newsgroup, or syndicating a feed.

At the time I looked around, but it seemed no one had done anything like this yet.
So I wrote a very simple RSS feed generator for MOM alerts.
I did it quite a while ago, just as an exercise.
Then, after a while, I found out that the MOM 2005 Resource Kit had been updated to include just such a utility!

Wow, I thought, they finally added what I had been thinking about for a while. Might it be because I mentioned it on a private mailing list? Maybe. Maybe not. Who cares. Of course, if it is included in the Resource Kit, it must be way cooler than the one I made, I thought.
I really thought something along those lines, but never actually had the time to try it out.
I just sort of assumed it must be cooler than mine, since it was part of an official package, while I am not a developer. So I basically forgot about the one I wrote, dismissing it as crap without looking into it any further.
Until today.
Today I actually tried to use the Alert-to-RSS tool included in the Resource Kit, because a customer asked if there was any other way to get notified, other than receiving notifications or using the console (or the console notifier).
So I looked at the Resource Kit's Alert-to-RSS utility.
My experience with it:
1) it is provided in source code form – which would be fine if it were ALSO provided in compiled form. Instead it is ONLY provided as source, and most admins don't have Visual Studio installed or don't know how to compile from the command line;
2) even if they wanted to compile it, it includes a bug which makes it impossible to compile – solution in this newsgroup discussion;
3) if you don't want to mess about with code, since this is a Resource Kit tool (as opposed to something in the SDK), you can even get it already compiled by someone, somewhere on the net – but that choice comes down to trust.

Anyway, one way or another, once it is finally set up… surprise surprise!!!
It does NOT show a LIST of alerts (as I was expecting).
It shows a summary of how many alerts you have. Basically it is an RSS feed made of a single item, and that single item tells you how many alerts you have. What is one supposed to do with such a SUMMARY? IMHO, it is useless the way it is. It is even worse than one of those feeds that only contain the excerpt of an article, rather than the full text.
Knowing that I have 7 critical errors and 5 warnings, without actually knowing ANYTHING about them, is pointless.
It might be useful for a manager, but not for a sysadmin, at least.

So I thought my version, even if coded badly, might be useful to someone, because it gives you a list of alerts (those that are not resolved), and each of them includes the description of the alert, the machine that generated it, and a link to the actual alert in the web console – so you can click, go there, and start troubleshooting from within your aggregator!
My code does this. Anyway, since I am a crap coder, since I wrote it in only fifteen minutes more than a year ago, and since I don't have time to fix it and make it nicer, it has several issues and could be improved in a million ways, in particular in the following areas:

  1. it currently depends on the SDK database views – it could use the MOM Server APIs or the web service instead;
  2. it uses SQL authentication to connect to the database. By default MOM does not allow this: it is recommended that the SQL instance hosting "OnePoint" use only Windows Integrated Authentication. So, to make my code work, you have to switch back to Mixed Mode and create a SQL login that has permission to read the database. This is because I coded this in five minutes and I don't know how to use delegation – if I were able to use delegation I would, so that the end user accessing IIS would be the one connecting to the database. If anybody wants to teach me how to do this, I will be most grateful;
  3. it could accept parameters as URL variables, so as to filter out only events for a specific machine, a specific resolution state, and so on;
  4. at present it uses RSS.NET to generate the feed. It could be made independent of it, but I don't really see why – I quite like that library.
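As a reference for point 2: if delegation (or a suitably privileged application pool identity) were in place, the SQL-authentication connection string used in the code could be swapped for a Windows Integrated Authentication one. A hypothetical fragment, reusing the server address from the sample:

```
Data Source=192.168.0.222;Initial Catalog=OnePoint;Integrated Security=SSPI;
```

Note that under ASP.NET this connects as the worker process (or impersonated) identity, so that account – not the browsing user, unless Kerberos delegation is configured – is the one that needs read access to OnePoint.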

The code is just an ASP.NET page and its code-behind – no need to compile – but of course you need to change a couple of lines to match your web console address.
You also need to get RSS.NET and copy its library (RSS.Net.dll) into the /bin subfolder of the website directory where you place the feed generator page. I wrote this against version 0.86, but any version should do, really.

Here is what it will look like:

AlertToRSS

And here's the code of the page (two files):

Default.aspx

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

Default.aspx.cs

using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Web;
using Rss;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Must change to match your web console address
        string webconsoleaddress = "http://192.168.0.222:1272/AlertDetail.aspx?v=a&sid=";

        // Initialize the feed
        RssChannel rssChannel = new RssChannel();
        rssChannel.Title = "MOM Alerts";
        rssChannel.PubDate = DateTime.Now;
        rssChannel.Link = new Uri("http://192.168.0.222:1272/rss/"); // must change to match your address
        rssChannel.LastBuildDate = DateTime.Now;
        rssChannel.Description = "Contains the latest Alerts";

        // Query – you might want to change the severity
        string mySelectQuery = "SELECT ComputerName, Name, Severity, TimeRaised, RepeatCount, GUID FROM dbo.SDKAlertView WHERE Severity > 10 AND ResolutionState < 255";

        // SQL connection – must change SQL server, user name and password
        SqlConnection conn = new SqlConnection("Data Source=192.168.0.222;Initial Catalog=OnePoint;User ID=rss;Password=rss");
        SqlDataReader rdr = null;

        try
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(mySelectQuery, conn);
            rdr = cmd.ExecuteReader();
            while (rdr.Read())
            {
                RssItem rssItem = new RssItem();
                rssItem.Title = rdr[1].ToString();       // alert name
                string url = webconsoleaddress + rdr[5]; // GUID links to the web console alert
                rssItem.Link = new Uri(url);
                string description = "<p><a href=\"" + rssItem.Link + "\">" + rdr[1] + "</a></p><br>" +
                    "<br>Computer: " + rdr[0] + "<br>Repeat Count: " + rdr[4] + "<br>Original Alert Time: " + rdr[3];
                rssItem.Description = description;
                rssChannel.Items.Add(rssItem);
            }

            // Finalize the feed
            RssFeed rssFeed = new RssFeed();
            rssFeed.Channels.Add(rssChannel);
            Response.ContentType = "text/xml";
            Response.ExpiresAbsolute = DateTime.MinValue;
            rssFeed.Write(Response.OutputStream);
        }
        finally
        {
            if (rdr != null)
            {
                rdr.Close();
            }
            if (conn != null)
            {
                conn.Close();
            }
        }
    }
}

Out-Blog!

Friday, November 24th, 2006

[Edited again 25th November – Jachym gave me some suggestions and insights on the use of parameters, and I slightly changed/fixed the original code I had posted yesterday. There are still more things that could be improved, of course, but I'll leave them for the future, next time I have time for it (who knows when that will be?)]

This is a post about my first test writing a cmdlet for PowerShell. A few days after changing my blog's title to "$daniele.rant | Out-Blog" (where Out-Blog was a fantasy cmdlet name, and the title was just meant to mimic PowerShell syntax in a funny way), I stumbled across this wonderful blog post: http://blog.boschin.it/archive/2006/09/21/4375.aspx which describes how to use the assemblies of "Windows Live Writer". Then I saw the light: I could actually implement an "Out-Blog" cmdlet. I am not sure what this could be useful for… but I thought it would be fun to experiment with. I followed the HOW-TO information in this other blog post to guide me through the coding: http://www.proudlyserving.com/archives/2005/10/lets_all_write_1.html

The result is the code that follows. What you see is pretty much Boschin's code wrapped into a cmdlet class. Nothing fancy. Just a test. I thought someone might find it interesting. It is provided "AS IS", mainly for educational purposes (MINE, only mine… I'm the one whose education is being improved, not yours :-))

using System;
using System.Collections.Generic;
using System.Text;
using System.Management.Automation;
using WindowsLive.Writer.BlogClient.Clients;
using WindowsLive.Writer.BlogClient;
using WindowsLive.Writer.CoreServices;
using WindowsLive.Writer.CoreServices.Settings;
using WindowsLive.Writer.Extensibility.BlogClient;
using Microsoft.Win32;

namespace LiveWriterCmdlet
{
    [Cmdlet("out", "blog", SupportsShouldProcess = true)]
    public sealed class OutBlogCmdlet : Cmdlet
    {
        [Parameter(Position = 0, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string Title
        {
            get { return _title; }
            set { _title = value; }
        }
        private string _title;

        [Parameter(Position = 1, Mandatory = true, ValueFromPipeline = true, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string Text
        {
            get { return _text; }
            set { _text = value; }
        }
        private string _text;

        [Parameter(Position = 2, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string BlogApiEndPoint
        {
            get { return _blogapiendpoint; }
            set { _blogapiendpoint = value; }
        }
        private string _blogapiendpoint;

        [Parameter(Position = 3, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string UserName
        {
            get { return _username; }
            set { _username = value; }
        }
        private string _username;

        [Parameter(Position = 4, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string Password
        {
            get { return _password; }
            set { _password = value; }
        }
        private string _password;

        [Parameter(Position = 6, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string ProxyAddress
        {
            get { return _proxyaddress; }
            set { _proxyaddress = value; }
        }
        private string _proxyaddress;

        [Parameter(Position = 7, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public int ProxyPort
        {
            get { return _proxyport; }
            set { _proxyport = value; }
        }
        private int _proxyport;

        [Parameter(Position = 8, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string ProxyUserName
        {
            get { return _proxyusername; }
            set { _proxyusername = value; }
        }
        private string _proxyusername;

        [Parameter(Position = 9, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string ProxyPassword
        {
            get { return _proxypassword; }
            set { _proxypassword = value; }
        }
        private string _proxypassword;

        [Parameter(Position = 10, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public SwitchParameter Published
        {
            get { return _published; }
            set { _published = value; }
        }
        private bool _published;

        protected override void BeginProcessing()
        {
            base.BeginProcessing();
            ApplicationEnvironment.Initialize();

            // Only enable the proxy when an address was actually supplied.
            if (!string.IsNullOrEmpty(ProxyAddress))
            {
                WebProxySettings.ProxyEnabled = true;
                WebProxySettings.Hostname = ProxyAddress;
                WebProxySettings.Port = ProxyPort;
                WebProxySettings.Username = ProxyUserName;
                WebProxySettings.Password = ProxyPassword;
            }
            else
            {
                WebProxySettings.ProxyEnabled = false;
            }
        }

        protected override void ProcessRecord()
        {
            if (ShouldProcess(Text))
            {
                // Persist dummy credentials the way Live Writer expects them
                ISettingsPersister persister = new RegistrySettingsPersister(Registry.CurrentUser, @"Software\Windows Live Writer");
                IBlogCredentials credentials = new BlogCredentials(new SettingsPersisterHelper(persister));
                IBlogCredentialsAccessor credentialsAccessor = new BlogCredentialsAccessor("dummy-value", credentials);

                credentials.Username = UserName;
                credentials.Password = Password;

                MovableTypeClient client = new MovableTypeClient(new Uri(BlogApiEndPoint), credentialsAccessor, PostFormatOptions.Unknown);

                BlogPost myPost = new BlogPost();
                myPost.Title = Title;
                myPost.Contents = Text;
                client.NewPost("dummy-value", myPost, Published);

                WriteVerbose("Posted Successfully.");
            }
        }
    }
}

Email talk on Port25

Monday, November 20th, 2006

An interesting interview with Eric Allman on Port25.
He talks about the future of email, SenderID, sendmail… openness and interoperation.
Very interesting.
With the change in the licensing of SenderID, let's see how quickly this gets picked up by Wietse Venema.

Time Capsule

Thursday, October 12th, 2006

Yahoo has done it again: yet another cool photographic site, Time Capsule.
They show every day that they really GET the community thing. Thumbs up for them.

On a side note, I honestly don't know why you need to UPLOAD photos there, rather than just LINK or REFERENCE photos you've already uploaded to Flickr (isn't that a site they bought?). I don't see any effort at INTEGRATION here, and I don't get why. If I had a photo platform like Flickr, I would use its web API to let registered users simply "PASS OVER" pictures from one site to the other.

But, hey… regardless… it looks really COOL. It really does. Cooler than actually USEFUL (it reminds me of Intrusion Detection Systems… but I digress), but that's how a lot of these community things are: not useful for your business, but good for your heart.

Google has pissed me off this week!

Saturday, October 7th, 2006

Now, I have always pretty much liked GMail and Google in general. But this time they REALLY pissed me off! I am not a Google-hater, even if I work for a competing company. Of course not everything Google does is wonderful, but some of their services are really cool and useful, and I have never refused to say they rocked when I felt they did.
In general, people seem to love them, and their stock value shows it (with the launch of "Code Search" this week they made a lot of people scream "how cool is this", and the stock went from just under 400 dollars back up to 417!). But that's not the issue. That is cool, that works. It's fine by me that they make money if they make cool tools.

In fact, I consider GMail one of the best interfaces for reading mail out there – I love "tagging" (oops: it's called "labelling" in their terminology) and the speed of searching through messages (even though Outlook 2007 is faster on indexed content – but then you have to buy it and install it on your PC)… I especially love the way it shows THREADING… so much so that I moved pretty much EVERY mailing list I read over to that account:

Ma come se fa ?
(ok, they could do better with the localized versions of "Re:" in replies… in Italian, a lot of broken MUAs translate that into "R:", which isn't understood by GMail and makes it think a reply is a new thread… but that's a minor issue, and one that every MUA handling threading has – including "mutt". The real problem is the broken MUAs sending the "R:" in the first place. But I digress too much….)

I also keep GMail open in a browser all day, because a lot of informative mail, and mail from friends, goes there. This is to say that I do see a lot of their ads (which is the whole point of such an application, for them…). By contrast, Windows Live Mail reduced its ads to showing just one, so as not to annoy you too much.
But the ads in GMail were never *really* a problem (I don't read them anyway; I just plain IGNORE THEM).

But this week they REALLY pissed me off. They REALLY have. Here is the reason:
I have been using a script for MONTHS to back up my database (the one powering THIS blog) and send it "off-site" to my GMail mailbox – pretty much what a lot of other people do, as described in various articles and blog posts. I then labelled the messages with a rule, so I could access my backups easily if I ever needed them.

Now, I don't know whether this violates their terms of use in any way… I am not really using GMail as raw storage with one of those programs that circulated at one stage, which had "reverse engineered" it. Those bypassed the web interface altogether, so people used it as storage without ever seeing the ads – that was the issue, I think. In my case, I am just sending MAILS to myself. One per day. I also delete the old ones every now and then, and they are not even huge in size (attachments of 40 to 50KB so far!!). Besides, I know a lot of people who store documents and all sorts of stuff even in their corporate Outlook mailboxes (and then maybe index them with Windows Desktop Search or Google Desktop to find things again)… I was only doing the same with GMail. I don't see the big issue here… they might think otherwise… but from what happens, I don't think that's the issue.

Anyway, for three or four days now my backup mail has been getting rejected. My SMTP server is told:

host gmail-smtp-in.l.google.com[66.249.83.27] said:
550-5.7.1 Our system has detected an unusual amount of unsolicited
550-5.7.1 mail originating from your IP address. To protect our
550-5.7.1 users from spam, mail sent from your IP address has been
550-5.7.1 rejected. Please visit
550-5.7.1 http://www.google.com/mail/help/bulk_mail.html to review
550 5.7.1 our Bulk Email Senders Guidelines.

Now, for fuck's sake. You know how much I hate SPAMMERS and what I would like to do to them. But I also know that it does sometimes happen that you end up in RBLs and the like. Fine. But GIVE ME a way to tell you that I am NOT one! If you go to the link above, all you find is a form where you can report that mail which ended up in your "junk" folder actually wasn't spam. Yeah, right. In my case it does not even reach a "junk" folder! How am I supposed to give them the original headers that arrived at THEIR end, when all I have is what my mail server sent? They have simply blacklisted my mail server's IP address – even though, as their guidelines suggest, I have an SPF record, I always use the same address, etc.
So I tried to fill in the form, the day after I also tried to contact their abuse@google.com and abuse@gmail.com addresses.
Still nothing.
They even tell you, in the automated reply you get when you contact "abuse":
"[...] For privacy and security reasons, we may not reveal the final outcome of an abuse case to the person who reported it. [...]".
Great. How am I supposed to know whether they even READ my complaint?

You anti-spam people at GMail: "I am NOT a fucking spammer!!!!!". I haven't found a better way to tell you this, you know, than writing it on my blog… this is just RIDICULOUS!

But to date my mails still get dropped. I'll probably have to send my backups somewhere else. At this point they have pissed me off so much that I am also seriously considering going back to using my own mail server for receiving and reading my mailing lists, too. Then I won't get ads there.
Afzetterij! (Dutch for "rip-off!")
(I hope you have some Dutch guy on board at Google, as "Google Translate" does not translate from/to Dutch yet… )

Edited on October 8th – While GMail REJECTS those mails (it SAYS it is not accepting them), Hotmail simply DROPS them (that is, it does not even SAY it is not accepting them):

to=, relay=mx4.hotmail.com[65.54.245.104], delay=3, status=sent (250 <20061008061010.GA19807@muscetta.com> Queued mail for delivery)

This way you THINK the mail is going to be delivered, but it NEVER shows up in the inbox. I don't know who's behaving worse…