notes from a passionate developer

Developer that lives by the mantra "code is meant to be shared".

This is a personal blog. The opinions expressed here are my own and not those of my employer, current or previous. All content is published "as is", without warranty of any kind, and I take no responsibility and cannot be held liable for any claims, damages or other liabilities caused by the content.

Honored to have become a tag in someone else's blog

Daniel Wertheim

I’m actually honored to have become an actual tag in someone else’s blog. Kudos to you, David Zych, and thanks for questioning my previous post.

[Image: made it to become a tag]

To be clear, I just wanted to highlight the difference in performance, as I find this kind of stuff interesting. I emphasized the relative value, 400x, as it’s generally easier for people to grasp. The raw numbers are there as well, so that people, like you did, can come to their own conclusions. But I’m not saying anyone should mindlessly switch. Rather, know about it and reflect on it, maybe in cases where an enum is used as a tuple list. Is that what they are for? Or are they more for usage as bit flags?
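The two roles are worth keeping apart. Python draws the same distinction explicitly in its `enum` module, so here is a minimal sketch of the two usages (the `OrderStatus` and `Permissions` names are made up for illustration, not from the original post):

```python
from enum import Enum, Flag, auto

# Enum as a plain value list ("tuple list"): a variable holds exactly one member.
class OrderStatus(Enum):
    PENDING = auto()
    SHIPPED = auto()
    DELIVERED = auto()

# Flag for bit flags: members combine with bitwise operators into a set.
class Permissions(Flag):
    READ = auto()
    WRITE = auto()
    EXECUTE = auto()

status = OrderStatus.SHIPPED                   # exactly one value at a time
perms = Permissions.READ | Permissions.WRITE   # a combination of values

print(status.name)                 # prints "SHIPPED"
print(Permissions.READ in perms)   # prints True
```

In .NET the same split is expressed with the `[Flags]` attribute on an enum; serializing a plain value-list enum to a name string is the case where a precomputed lookup can replace `Enum.ToString`.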

My test, like yours, was not scientific at all. The numbers were just picked out of the air; 10000 in itself could represent requests against a site. You are absolutely right: compared to other parts of the system, the total “waste” is probably not much at all. It will most likely not show up as a hot-spot in your profiler. Not even if you are using a NoSQL document store that stores JSON, and your simple request stores a couple of these docs, which during serialization evaluate the ToString. Not even if this furthermore led to a message being serialized and dispatched, handled in another bounded context, picked up by a command handler that invokes some action(s) on an aggregate root producing a series of events: first being serialized to be event-sourced in an event store, then being serialized in the dispatcher that routes to a couple of listeners/handlers for materializing different read models and kicking off new sub-processes in other bounded contexts.

10000 req => persisting two documents with two enums => issuing a command with two enums over a bus => one subscriber consumes it, and in an aggregate three new events with two enums are dispatched over queues and stored in the event store => three listeners aggregate read models etc. by storing JSON from the event.

Seems like it’s not that hard to reach something like this:

10000 * ((2 * 2) + (1 * 2) + (3 * 2) + ((3 * 2) * 3)) = 300000 serializations

(A:Enum.ToString) 0.0016ms * 300000 = 480ms
(B:Static) 4e−6 ms * 300000 = 1.2ms

With something like 50000 req it starts to show, but that’s spread across different processes etc. and has to be compared against the total processing time (note!).

50000 req => 1500000 serializations

(A:Enum.ToString) 0.0016ms * 1500000 = 2400ms
(B:Static) 4e−6 ms * 1500000 = 6ms
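The arithmetic above can be checked with a quick sketch. The 0.0016 ms per `Enum.ToString` call is the figure from the earlier post; for the static variant I use 4e−6 ms per call, which is the per-call cost the 1.2 ms and 6 ms totals imply:

```python
# Serializations per request, from the pipeline above:
#   persisting two documents with two enums each     -> 2 * 2
#   one command with two enums over a bus            -> 1 * 2
#   three events with two enums each, to event store -> 3 * 2
#   three listeners re-serializing those events      -> (3 * 2) * 3
per_request = (2 * 2) + (1 * 2) + (3 * 2) + ((3 * 2) * 3)  # 30

for requests in (10_000, 50_000):
    serializations = requests * per_request
    enum_tostring_ms = 0.0016 * serializations  # A: Enum.ToString, ~0.0016 ms per call
    static_ms = 4e-6 * serializations           # B: static lookup, ~4e-6 ms per call
    print(requests, serializations, enum_tostring_ms, round(static_ms, 2))
```

This reproduces the 300000 and 1500000 serialization counts and the 480 ms / 1.2 ms and 2400 ms / 6 ms totals.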

Another simple scenario for reaching that amount of Enum.ToString usage (“doing it wrong”, as you put it) could be a simple transaction log that stores one or more records per request.

Thanks for highlighting this. I think it’s awesome when people question and give constructive criticism.


