I think the article is flawed.
1. In response to an earlier comment “They're not DB people, they just have a product and they've highlighted one small part of their product that can show what's happening in the database.”
I can obtain very similar information from the database server query log or other tools. There is nothing special about the highlighted feature and the example is biased.
Of course you can, nobody is suggesting the product is magical.
You can look at code in vi or Notepad; it doesn't make an IDE redundant.
But such products bring the information together so you can collate and drill down through such information. And I also said looking at database queries was a small part of the product.
2. In the article the diagram displays '3,510 rows found'. The next sentence reads:
"As shown above, to process one web request this application is making over 3000 database calls."
When the statement is read in isolation, given that the context of the article is "How to Exploit Your Database for Better Performance", an application that makes 3,000 database calls to service 1 http request would trigger 1 very large alarm bell!
Very true, but how would you find out that one http request triggered 3,000 database calls? That's why there's a market for products like DynaTrace.
3. Then, when I did correlate the statement with the diagram, maybe the statement made by the author was just incorrect?
If 3,510 rows were returned from the database, surely this does not mean an equivalent 3,000-odd queries were executed?
The rows refer to the queries/updates executed: most databases store this information in database tables anyway, so the 3,510 rows will be the database's audit of SQL statements, nothing related to the business data involved.
So in this case, 3,000-odd queries were indeed executed.
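To make concrete how a single request can rack up thousands of queries, here is a minimal sketch of the classic N+1 pattern that tools like DynaTrace surface. The schema, table names, and row counts are invented purely for illustration, using Python's built-in sqlite3 module:

```python
import sqlite3

# Hypothetical schema for illustration: 1,000 orders, each with line items.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY);
    CREATE TABLE items (order_id INTEGER, qty INTEGER);
""")
conn.executemany("INSERT INTO orders (id) VALUES (?)", [(i,) for i in range(1000)])
conn.executemany("INSERT INTO items VALUES (?, 1)", [(i,) for i in range(1000)])

query_count = 0

def run(sql, args=()):
    # Count every statement sent to the database.
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# N+1 pattern: one query for the parents, then one more per parent for children.
orders = run("SELECT id FROM orders")
for (order_id,) in orders:
    run("SELECT qty FROM items WHERE order_id = ?", (order_id,))

print(query_count)  # 1001: one parent query plus one per order
```

A naive lazy-loading ORM traversal generates exactly this shape of traffic, which is how "one web request" quietly becomes thousands of round trips.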
A strategy to treat the database as 'just a data store' is OK. I think the article is flawed, as it assumes that this strategy inherently produces large volumes of database queries per web request. The author also implies the architects' strategy is at fault.
It is also not clear whether the example provided was a typical application request or a reporting request. If it was a reporting request, note that the author ends the article by suggesting complex reporting queries can be handled more efficiently in the database tier. So the most complex request may have been used as the example, and the architects' strategies scrutinised, only for the author to conclude that it doesn't matter anyway because it should be a stored procedure.
But the example perhaps showed the architecture in this particular use case was wrong. And if that's the case, it doesn't mean the architecture is wrong overall, just that it doesn't apply to 100% of cases.
I'd never design reporting solutions that used pure ORM unless the queries were very simple and low volume; it's much more likely that optimised SQL or stored procedures would be needed for heavy reports.
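The difference is easy to sketch. Below, a hypothetical sales table (names and figures invented for illustration) is aggregated two ways: ORM-style, by pulling every row into the application, versus a single GROUP BY that lets the database do the work. Both give the same answer, but the second ships one small result set instead of the whole table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10.0), ("north", 5.0), ("south", 7.5)])

# ORM-style: fetch every row across the wire, aggregate in application code.
totals = {}
for region, amount in conn.execute("SELECT region, amount FROM sales"):
    totals[region] = totals.get(region, 0.0) + amount

# Report-friendly SQL: the database aggregates in one statement,
# returning one row per region instead of one row per sale.
sql_totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))

assert totals == sql_totals
```

On three rows it makes no difference; on millions, the in-database version is the only one that scales, which is why heavy reports end up as optimised SQL or stored procedures.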
I suggest the performance problems stem not from the storage strategy or the products, but from the developers' lack of knowledge of O/R mapping and other tools when writing code that accesses the database.
Completely agree. My concern with ORM is that it is used without sufficient knowledge of databases.
I have witnessed this many times, most recently in a third-party web application exhibiting this exact issue. It had nothing to do with the database, the architect, or the server.
And if it was a third-party app, imagine how difficult it would be to find the problem without monitoring. You might not have the source code, and the supplier might not have a database similar to your production system with which to replicate the problem.