This is going to be a quick rant.
In my line of work, I deal with a couple of vendors who have sold my employer .NET (ASP.NET 2.0+) applications and utilities, both usually written in C#. I appreciate that .NET makes a lot of tasks quick and trivial, and it makes good use of each programmer's time. Also nice is the fact that a .NET application runs on any machine with the framework installed (and, in some cases, even under Mono).
What annoys me is that programmers, I think, often reach for the solution that is quickest to write rather than the one that is most efficient, instead of looking at the problem and finding the best answer.
For instance, one application I am working with puts out a huge (200MB or so) XML file. The problem is that the data source feeding this XML has some quirky data, which results in elements/nodes that need to be removed. The vendor "gave us help" and provided a utility that strips the bad stuff out, but it also reads the ENTIRE XML file into memory. Based on its behavior, it looks like they are using the XmlDocument class.
Yes, that class is useful, and I am sure it's convenient to have the whole XML file as a DOM in memory. It's really bloody inefficient, though. It's simple to use, but the framework is doing a lot of complex work behind the scenes. Instead, my solution:
- Open the file
- Read it line by line, looking for bad data
- Keep track of where "it" is relative to the current element
- If the element is "OK", flush it to a temporary file
- Overwrite the input file with the temporary file
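The steps above can be sketched roughly like this. The real tool is a C# console application, but the approach is language-agnostic, so here is a Python sketch; the function name `filter_bad_elements` is made up, and it assumes each element's open and close tags sit on lines of their own, which is typical of machine-generated exports:

```python
import os
import tempfile

def filter_bad_elements(path, element, bad_tokens):
    """Stream the file line by line, buffer one <element>...</element> at a
    time, and drop any element whose lines contain a bad token.  Everything
    outside those elements (prologue, root tags) passes through untouched."""
    open_tag = "<" + element          # crude tag match; fine for a sketch
    close_tag = "</" + element + ">"
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    inside, buffer = False, []
    with open(path) as src, os.fdopen(fd, "w") as tmp:
        for line in src:
            if not inside and open_tag in line:
                inside = True
            if inside:
                buffer.append(line)
                if close_tag in line:
                    # Element complete: flush it only if it is clean
                    if not any(tok in chunk for chunk in buffer
                               for tok in bad_tokens):
                        tmp.writelines(buffer)
                    inside, buffer = False, []
            else:
                tmp.write(line)
    os.replace(tmp_path, path)        # overwrite input with the temp file
```

Only one element's worth of lines is ever held in memory, which is why this scales to a 200MB file where a full DOM load does not.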
All said and done, my version takes about a third of the time and about 1/30th of the memory. Plus, the vendor hard-coded the items "to look for" in the application. Seriously, using an App.config file is trivial in .NET 3.5, and there is no reason this shouldn't target 3.5, since it's a stand-alone console application.
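For what it's worth, externalizing those hard-coded items is only a few lines of config. A minimal sketch of what such an App.config could look like, where the key name `BadElementNames` and its values are made up for illustration:

```xml
<configuration>
  <appSettings>
    <!-- hypothetical key: the tokens the cleanup tool should strip -->
    <add key="BadElementNames" value="quirkyNode;otherBadNode" />
  </appSettings>
</configuration>
```

The utility could then read the list back with `ConfigurationManager.AppSettings["BadElementNames"]` (System.Configuration) and split on the delimiter, instead of shipping a recompiled binary every time the "bad" list changes.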
Just because it looks simple to the programmer does not mean it’s simple for the computer, too.