K.I.S.S


The world of software engineering is full of acronyms and terms as long as (and longer than) your arm. There are a few super important ones when developing code in an enterprise, to ensure tech debt (and the accompanying Human Debt) does not build up and hamstring the product in the medium to long term.

Keep it simple, stupid!

  • SOLID
  • ACID
  • TDD
  • BDD
  • etc.

However, IMHO, they don't even come close to KISS: Keep It Simple, Stupid.

Keeping things simple is an art form in itself, and it takes time, patience and, most of all, knowledge to develop code that is as simple as it can be.

When I talk to developers now, I highlight the three things that, to me, define how code should be written.

1) Code Safety.

The code has to be written so that it does not cause memory leaks or any oddities under stress. Under stress is important: anything that can happen, will happen in a production environment. A developer once introduced code that used tasks because "async is faster", but safety was never considered. So when the code reading a multi-result-set stored procedure was run in parallel, who knew whether result set 1 was being read before result set 0?
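To make that concrete, here is a minimal sketch (the stored procedure name is hypothetical) of the safe, sequential way to consume a multi-result-set procedure. The point is that the result sets arrive in order and must be read in order; wrapping each read in its own task throws that ordering away.

```csharp
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class MultiResultExample
{
    public static async Task<(List<string> First, List<string> Second)> LoadAsync(
        string connectionString)
    {
        await using var conn = new SqlConnection(connectionString);
        await conn.OpenAsync();

        // Hypothetical procedure returning two result sets.
        await using var cmd = new SqlCommand("dbo.GetTwoResultSets", conn)
        {
            CommandType = CommandType.StoredProcedure
        };

        await using var reader = await cmd.ExecuteReaderAsync();

        // Result set 0 must be fully consumed first...
        var first = new List<string>();
        while (await reader.ReadAsync())
            first.Add(reader.GetString(0));

        // ...then we advance, in order, to result set 1. Running these two
        // loops as parallel tasks would make the read order a coin toss.
        await reader.NextResultAsync();
        var second = new List<string>();
        while (await reader.ReadAsync())
            second.Add(reader.GetString(0));

        return (first, second);
    }
}
```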

2) Readability.

The code has to be written so that it is legible and its intent is clear. Unreadable, overcomplicated code helps no one; if it genuinely has to be complicated, that is what comments, and to a certain extent tests, are for. A contrived sketch of the difference is below.
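Purely to illustrate (neither method is from a real codebase), both of these return the same answer, but only one tells you what it is doing:

```csharp
using System.Linq;

public static class ReadabilityExample
{
    // "Clever": one line, but you have to decode it to know what it does.
    public static bool C(int[] a) =>
        a.Where((x, i) => i % 2 == 0).Sum() > a.Where((x, i) => i % 2 != 0).Sum();

    // Clear: the name and the locals state the intent outright.
    public static bool EvenIndexedItemsOutweighOddIndexed(int[] values)
    {
        var evenIndexedSum = 0;
        var oddIndexedSum = 0;
        for (var i = 0; i < values.Length; i++)
        {
            if (i % 2 == 0) evenIndexedSum += values[i];
            else oddIndexedSum += values[i];
        }
        return evenIndexedSum > oddIndexedSum;
    }
}
```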

But, I hear you cry, what about performance? The code is complicated because it has to perform…

3) Performance.

You are not Google, you are not Amazon, you are not Microsoft; hell, odds are you are not even "Ask Jeeves" (no offence). Your scale is not that big, so performance is rarely your real constraint. Code optimisations are rarely worth it: is the code even in the 20% of the codebase that is executed 80% of the time? That said, it does have to meet the business requirement of getting something done in a reasonable time.

On my last job, before I joined, a process was running "too slow". The code took an age to execute, so to compensate, a complicated arrangement of SNS, SQS and Lambdas had been built to throttle (and limit) the throughput of the calculations processed overnight. Never one to shirk such a challenge, I rolled up my sleeves and got to work.

Problem 1: no telemetry. Where is the code spending its time? How many database calls are occurring? How many of those are repeated? So get some telemetry in. Detailed telemetry of the right depth and terseness may cost, but it is going to be worth it. Don't touch anything until you can measure. With telemetry in place, we can see where the code is (and is not) spending its time, and we have meaningful, consistent data to work with.

Fairly obviously, database calls are the first port of call, and caching should be used wherever appropriate (sketched below). Standing data that never changes (countries, currencies, ISO lists) should always be cached. Semi-permanent data (account details, names/addresses, etc.) should be cached for a limited time (5-10 minutes depending on use case), though even then some form of "early purge" may be required. Volatile data should not be cached: a shopping basket is volatile, though the details of the items in it are not. In my experience caching is always the biggest win; not doing something that was previously done is a 100% improvement.

Then we can look at ensuring async/await is used on all possible code paths. If the execution tree is deep, Task.Wait can be used where necessary to avoid over-refactoring and making the change too big; small, measurable improvements are what we are after. ORMs should use read-only connections/options where possible to save the overhead of building unnecessary objects, and dictionaries should be used in place of iterating arrays. But a surprising killer is reflection.

All in, I took a process that took 24+ hours to run down to something that can run in real time, doing the same work in 2-3 seconds.
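Here is that caching sketch: a minimal illustration of the three tiers using Microsoft.Extensions.Caching.Memory. The class, method and key names are mine, not from the original system; the expiry policies are the point.

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

public class TieredCache
{
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

    // Standing data (countries, currencies, ISO lists): cache indefinitely.
    public T GetStanding<T>(string key, Func<T> load) =>
        _cache.GetOrCreate(key, entry =>
        {
            entry.Priority = CacheItemPriority.NeverRemove;
            return load();
        });

    // Semi-permanent data (account details, names/addresses): a short TTL...
    public T GetSemiPermanent<T>(string key, Func<T> load) =>
        _cache.GetOrCreate(key, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return load();
        });

    // ...with an explicit "early purge" for when the data changes underneath us.
    public void Purge(string key) => _cache.Remove(key);

    // Volatile data (the shopping basket itself): never cached, always loaded.
    public T GetVolatile<T>(Func<T> load) => load();
}
```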

Reflection is pretty awesome

If you are exploring internal objects and the like, reflection is pretty awesome; we all like to lift the lid and expand our understanding, much as some of us like poking at trace flags in SQL Server, if that floats your boat. But for production code that uses it, more than likely there is a simpler, clearer and more performant way. (Challenge me: what are you using reflection for in production?)

In the project I described above, reflection was used to iterate through the members of an enum to pull out an attribute. One of those was an enum of countries, where each member carried an attribute holding its CountryName.

Consider the two pieces of code below: same output, but which is clearer? Why use reflection here? It is opaque, unclear, unsafe (more code than required), harder to maintain and slower; in fact, a lot slower.
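The original snippets are not reproduced here, so this is a reconstruction of the pattern (the Country enum and its values are illustrative): a reflection-based attribute lookup versus a plain dictionary.

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Reflection;

public enum Country
{
    [Description("United Kingdom")] GB,
    [Description("United States")] US,
    [Description("France")] FR
}

public static class CountryName
{
    // Reflection: walks type metadata and allocates on every single call.
    public static string ByReflection(Country country) =>
        typeof(Country)
            .GetField(country.ToString())!
            .GetCustomAttribute<DescriptionAttribute>()!
            .Description;

    // Dictionary: the mapping is explicit, verifiable at a glance, and fast.
    private static readonly Dictionary<Country, string> Names = new()
    {
        [Country.GB] = "United Kingdom",
        [Country.US] = "United States",
        [Country.FR] = "France"
    };

    public static string FromDictionary(Country country) => Names[country];
}
```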

As I implied above, I work (mostly) on data. So how do we get data on which method is "faster"? Well, if you care, or are even making the claim of "faster", you should be familiar with BenchmarkDotNet.
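A minimal BenchmarkDotNet harness over the two lookups above might look like this; it is a sketch, not the exact harness that produced the numbers below.

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class CountryNameBenchmarks
{
    // Baseline, matching the "ByReflection" row in the table below.
    [Benchmark(Baseline = true)]
    public int ByReflection()
    {
        var total = 0;
        foreach (var country in Enum.GetValues<Country>())
            total += CountryName.ByReflection(country).Length;
        return total;   // returned so the JIT cannot discard the work
    }

    [Benchmark]
    public int Dictionary()
    {
        var total = 0;
        foreach (var country in Enum.GetValues<Country>())
            total += CountryName.FromDictionary(country).Length;
        return total;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<CountryNameBenchmarks>();
}
```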

So, what about the code above?

| Method       | Mean       | Error    | StdDev   | Ratio |
|------------- |-----------:|---------:|---------:|------:|
| Dictionary   |   817.2 ns |  5.17 ns |  4.59 ns |  0.17 |
| ByReflection | 4,714.0 ns | 33.81 ns | 31.63 ns |  1.00 |

Pretty conclusive, I would say, and that's just with a small subset of countries.

That is perhaps not the biggest crime I've seen reflection committed for, though. How about rolling your own dependency injection engine?
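The original is not shown here, so this is a hypothetical reconstruction of the shape of the thing: a homegrown "container" that creates objects from type names via reflection. All the names are mine.

```csharp
using System;

public interface IReportService { void Run(); }

public sealed class ReportService : IReportService
{
    private readonly string _connectionString;
    public ReportService(string connectionString) => _connectionString = connectionString;
    public void Run() { /* ... uses _connectionString ... */ }
}

public static class HomegrownContainer
{
    // Stringly-typed resolution: a typo in the name compiles fine
    // and only blows up at runtime, in production, at 3am.
    public static T Resolve<T>(string typeName, params object[] ctorArgs)
    {
        var type = Type.GetType(typeName)
                   ?? throw new InvalidOperationException($"Type not found: {typeName}");
        return (T)Activator.CreateInstance(type, ctorArgs)!;
    }
}

// Usage: the dependency and its constructor are hidden behind strings.
// var service = HomegrownContainer.Resolve<IReportService>(
//     "MyApp.ReportService", "Server=...;Database=...");
```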

Obviously a simplified example; the real version passed multiple parameters to the object constructor. Even using a DI container here is an overcomplication. Let's keep it simple.
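The simple version, using the same hypothetical types: just construct the object. The compiler checks the call, and the dependency is visible at the call site.

```csharp
// No strings, no reflection, no surprises.
IReportService service = new ReportService("Server=...;Database=...");
service.Run();
```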

How much clearer is that?!

The takeaway: K.I.S.S. In 99.999% of circumstances there is no need for reflection in production code. Clever coding leads to unclear coding, unclear leads to fear, fear leads to stasis, and stasis leads to a dead product. A whole mountain of tech debt, because a developer wanted to show off how "clever" they can be.

