ORM – like we’ve always said

Brian Oliver’s post (http://brianoliver.wordpress.com/2008/11/26/terracotta-chooses-oracle-technology-for-high-availability-and-performance – now removed for some reason) about Terracotta reminded me that I meant to blog about this product last week.

It’s long been established that we database specialists do not have the fondest of spots for ORM tools such as Hibernate, the inefficient way in which they deal with the database, and the horrendous SQL that they sometimes generate. (Some of my previous posts on the subject can be found here.)

Last week I came across this article, encouragingly ;-) titled “Hibernate without Database Bottlenecks”.

The fact that products such as Terracotta exist and are becoming more popular is, finally, proof of a sort of what we’ve always said – that Hibernate’s row-by-row processing doesn’t work very well against the database (where it’s important to be thinking in sets).
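To illustrate the row-by-row vs set-based point, here’s a minimal sketch. This is plain Java with no real Hibernate API – the classes and the SQL in the comments are illustrative stand-ins for what each approach would typically send over the wire:

```java
import java.util.List;

// Hypothetical sketch: counts database round trips for an ORM-style
// row-by-row update vs a single set-based UPDATE statement.
public class RoundTrips {
    static int statements = 0;

    // ORM style: load each entity, mutate it, flush.
    // Hibernate ends up issuing one UPDATE per dirty row.
    static void rowByRow(List<Integer> ids) {
        for (int id : ids) {
            // UPDATE accounts SET balance = balance * 1.05 WHERE id = ?
            statements++;
        }
    }

    // Set based: one statement covers the whole set, and the
    // database optimiser can do the work in a single pass.
    static void setBased(List<Integer> ids) {
        // UPDATE accounts SET balance = balance * 1.05 WHERE id IN (...)
        statements++;
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3, 4, 5);

        statements = 0;
        rowByRow(ids);
        int ormCalls = statements;

        statements = 0;
        setBased(ids);
        int setCalls = statements;

        System.out.println(ormCalls + " statements vs " + setCalls);
    }
}
```

Five rows is trivial, but scale the list to a few million and the difference between N round trips and one is exactly the “10,000,000 DB calls a day” class of problem the vendors are selling a fix for.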

‘Nuff said.

P.S. If that article requires registration, then to summarise:


One of the most prevalent application architectures today is that of a stateless application that maps object data into the database to be stored in relational format, and Hibernate is the most popular way to perform this object-relational mapping. Applications are designed this way for two reasons. First, scalability at the database server is a known and tunable quantity. Second, availability of the database is much closer to “five nines” than that of the application server. Despite these reasons, the burden that shared Java state places on the database and on the application developer is very high. While Hibernate lessens the developer’s workload in having to interface to a database, Terracotta lessens Hibernate’s need to depend on the database for availability and scalability in the first place. The marriage of Terracotta and Hibernate simplifies application development and greatly improves application performance.

To cut a long story short, there’s an in-memory database for Hibernate to hammer instead. Can people seriously think that that is the solution?


5 Responses to ORM – like we’ve always said

  1. dombrooks says:

I’ve literally just received an email from someone at Terracotta offering me a training course – presumably as a result of my having to register to read that Terracotta pdf.

The first line of the training email reads: “Recently one of our largest customers told us they save 10,000,000 DB calls a day and no longer need an expensive RAC system. They have 0% custom clustering code. Want to learn how to do that?”

I think we already know, don’t we?

    Hibernate can turn 1 transaction into N calls to the database. It’s not rocket science…

  2. ARI ZILKA says:

    I may be barking up the wrong tree here but I think Hibernate–scratch that, ORM–is fundamentally flawed for certain classes of data. I wrote a blog about those classes here: http://blog.terracottatech.com/2008/11/breaking_down_the_relational_d.html

    What do you guys think?

  3. Tim Hall says:

    Hi.

    I don’t know how many times I’ve had to convert a bunch of middle tier processing into packaged procedures on the database to get any sort of performance.

Much of the middle tier performance technology seems to serve one purpose alone, and that is to force you into buying more kit for the middle tier when the problem could be solved easily on the database.

    You gotta laugh. :)

    Cheers

    Tim…

  4. dombrooks says:

    But “they” say that the database doesn’t scale… ;-)

  5. ARI ZILKA says:

    Tim,

Not sure I can agree that the DB and stored procs can be called “faster” than the middle tier. The first question is what the data being altered is all about. If the stored proc is editing DB data and operating on multiple records at once, it could be _much_ faster.

    But there is just no way the DB can beat an in-memory impl. of a workflow / state machine if nothing ever needed to get stored in the DB otherwise. Just no way the impedance mismatch can overcome a pure in-memory CPU-based alg.

    This is the basis of “Network Attached Memory”. Do all your operations in local mem, but have that memory durable and shared across machines.

BTW, the solutions you “replaced” – were they clustered, meaning replicating state via JMS, JGroups, or something else? Such overhead would explain the slowness.
