Archive for the Category 'gorm'

Building GORM Criteria Queries Dynamically

Monday, March 21st, 2016

I originally wrote most of the queries in the spring-security-ui plugin using HQL because I find it more intuitive than criteria queries, but HQL only works with Hibernate and relational databases. A pull request updated the queries to use criteria queries so the plugin can be used with NoSQL datastores, but one query didn’t fit the programming style that I was using. That wasn’t a big deal, but since a lot of the controller code is basically CRUD code and very similar from controller to controller, I’ve tried to keep the code generic and push shared logic into the base classes.

The original HQL included this

hql.append " AND e.aclObjectIdentity.aclClass.id=:aclClass"

and the converted criteria code was

aclObjectIdentity {
   aclClass {
      eq 'id', params.long('aclClass')
   }
}

with the whole query being similar to this:

def results = lookupClass().createCriteria().list(max: max, offset: offset) {
   // other standard criteria method calls

   if (params.aclClass) {
      aclObjectIdentity {
         aclClass {
            eq 'id', params.long('aclClass')
         }
      }
   }
}

That got me thinking about creating a way to represent that two-level projection and criterion generically.

If we restore the omitted optional parentheses the code becomes

aclObjectIdentity({
   aclClass({
      eq('id', params.long('aclClass'))
   })
})

So it should be clearer that this is a sequence of method calls: calling aclObjectIdentity with a closure argument, then aclClass with a closure argument, and finally eq with a String and a long argument. Splitting out the closures as local variables makes this even clearer, first as

def aclClassClosure = {
   eq('id', params.long('aclClass'))
}

aclObjectIdentity({
   aclClass(aclClassClosure)
})

and then

def aclClassClosure = {
   eq 'id', params.long('aclClass')
}

def aclObjectIdentityClosure = {
   aclClass(aclClassClosure)
}

aclObjectIdentity(aclObjectIdentityClosure)

To make this a bit more concrete, let’s say we have three domain classes:

Department:

class Department {
   String name
}

Manager:

class Manager {
   String name
   Department department
}

and Employee:

class Employee {
   String name
   Manager manager
}

We create some instances:

Department d = new Department(name: 'department1').save()
Manager m = new Manager(name: 'manager1', department: d).save()
Employee e1 = new Employee(name: 'employee1', manager: m).save()
Employee e2 = new Employee(name: 'employee2', manager: m).save()

and later want to run a query:

Employee.createCriteria().list(max: 10, offset: 0) {
   eq 'name', 'employee1'

   manager {
      department {
         eq 'name', 'department1'
      }
   }
}

My goal is to represent this query with only some helper methods and without any closures (or as few as possible). Splitting that out like above we have

def departmentClosure = {
   eq 'name', 'department1'
}

def managerClosure = {
   department(departmentClosure)
}

def criteriaClosure = {
   eq 'name', 'employee1'

   manager(managerClosure)
}

Employee.createCriteria().list([max: 10, offset: 0], criteriaClosure)

When the query is run, the delegate of criteriaClosure is set to an instance of HibernateCriteriaBuilder when using Hibernate, or an analogous builder for MongoDB or whatever other GORM implementation you’re using. The builder has defined methods for eq, like, between, etc., so when you make those calls in your criteria closure they’re run on the builder.
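This delegate dispatch is plain Groovy and can be seen without GORM at all. Here’s a minimal sketch, with an invented ToyBuilder standing in for the criteria builder (it is not part of any Grails API):

```groovy
// A toy stand-in for a criteria builder: it defines eq, so when a
// closure with this object as its delegate calls eq, the call is
// dispatched here.
class ToyBuilder {
   List criteria = []
   void eq(String property, Object value) {
      criteria << [property, value]
   }
}

def criteriaClosure = {
   eq 'name', 'employee1'
}

def builder = new ToyBuilder()
criteriaClosure.delegate = builder
criteriaClosure()   // eq isn't defined locally, so it runs on the delegate

assert builder.criteria == [['name', 'employee1']]
```

The closure’s owner (the script) has no eq method, so with the default OWNER_FIRST resolve strategy the call falls through to the delegate, exactly as it does with the real builder.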

It turns out that it works the same way if you split the closure into multiple closures and call them with the builder as the delegate for each. So a method like this works:

def runCriteria(Class clazz, List<Closure> criterias, Map paginateParams) {
   clazz.createCriteria().list(paginateParams) {
      for (Closure criteria in criterias) {
         criteria.delegate = delegate
         criteria()
      }
   }
}
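The essential part of runCriteria – pointing every closure at the same delegate and then invoking it – can be sketched in plain Groovy, with an invented RecordingBuilder in place of the GORM builder:

```groovy
// Records every criterion call it receives via methodMissing.
class RecordingBuilder {
   List ops = []
   def methodMissing(String name, args) {
      ops << ([name] + args.toList())
   }
}

// The same loop runCriteria uses: give each closure the shared
// delegate, then invoke it.
def runClosures(List<Closure> criterias, builder) {
   for (Closure criteria in criterias) {
      criteria.delegate = builder
      criteria()
   }
   builder
}

def closure1 = { eq 'name', 'employee1' }
def closure2 = { gt 'bar', 20 }

def builder = runClosures([closure1, closure2], new RecordingBuilder())
assert builder.ops == [['eq', 'name', 'employee1'], ['gt', 'bar', 20]]
```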

and that means that we can split

Employee.createCriteria().list(max: 10, offset: 0) {
   eq 'name', 'employee1'

   manager {
      department {
         eq 'name', 'department1'
      }
   }
}

into

def closure1 = {
   eq 'name', 'employee1'
}

def closure2 = {
   manager {
      department {
         eq 'name', 'department1'
      }
   }
}

and run it as

runCriteria Employee, [closure1, closure2], [max: 10, offset: 0]

But how can we make that projection generic? It’s an inner method call, wrapped in one or more closures that project down to another domain class.

What I ultimately want is to be able to specify a projection with an inner criteria call without closures:

def projection = buildProjection('manager.department',
                                 'eq', ['name', 'department1'])
runCriteria Employee, [closure1, projection], [max: 10, offset: 0]

Here’s the buildProjection method that does this:

Closure buildProjection(String path, String criterionMethod, List args) {

   def invoker = { String projectionName, Closure subcriteria ->
      delegate."$projectionName"(subcriteria)
   }

   def closure = { ->
      delegate."$criterionMethod"(args)
   }

   for (String projectionName in (path.split('\\.').reverse())) {
      closure = invoker.clone().curry(projectionName, closure)
   }

   closure
}
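Because buildProjection relies only on Groovy’s closure machinery, it can be exercised outside GORM entirely. In this sketch an invented RecordingBuilder plays the role of the criteria builder, so we can watch the curried closures unwind in the expected order:

```groovy
// A stand-in for the criteria builder: it records each dynamic method
// call, and when the argument is a closure it sets itself as the
// delegate and invokes it, like the real builder does.
class RecordingBuilder {
   List calls = []
   def methodMissing(String name, args) {
      def argList = args.toList()
      if (argList && argList[0] instanceof Closure) {
         calls << name
         Closure c = (Closure) argList[0]
         c.delegate = this
         c()
      }
      else {
         calls << "$name(${argList.flatten().join(', ')})".toString()
      }
   }
}

Closure buildProjection(String path, String criterionMethod, List args) {
   def invoker = { String projectionName, Closure subcriteria ->
      delegate."$projectionName"(subcriteria)
   }
   def closure = { -> delegate."$criterionMethod"(args) }
   for (String projectionName in (path.split('\\.').reverse())) {
      closure = invoker.clone().curry(projectionName, closure)
   }
   closure
}

def builder = new RecordingBuilder()
def projection = buildProjection('manager.department', 'eq', ['name', 'department1'])
projection.delegate = builder
projection()

// outermost projection first, then the nested one, then the criterion
assert builder.calls == ['manager', 'department', 'eq(name, department1)']
```

Setting the delegate on a curried closure works because Groovy’s CurriedClosure forwards delegate changes to the closure it wraps.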

To understand how this works, look again at the innermost closure:

department {
   eq 'name', 'department1'
}

This will be invoked as a method call on the delegate, in effect

delegate.department({
   eq 'name', 'department1'
})

Groovy lets us call methods dynamically using GStrings, so this is the same as

String methodName = 'department'

delegate."$methodName"({
   eq 'name', 'department1'
})

So we can represent the nested closures as an inner closure invoked as the closure argument of its containing closure, and that invoked as the closure argument of its containing closure, and so on until we run out of levels.

And we can build a closure that calls eq 'name', 'department1' (or any criterion method with arguments; this is just a simplified example) as

def closure = { ->
   delegate."$criterionMethod"(args)
}

So to represent the nested closures, start with an ‘invoker’ closure:

def invoker = { String projectionName, Closure subcriteria ->
   delegate."$projectionName"(subcriteria)
}

and successively clone and curry it at each nesting level to embed the projection name and its inner closure (currying is needed because the criteria builder invokes these closures without arguments), working from the inside out:

for (String projectionName in (path.split('\\.').reverse())) {
   closure = invoker.clone().curry(projectionName, closure)
}

So, finally we can run the decomposed query as one or more ‘core’ criteria closures with standard criterion method calls, plus zero or more derived projection closures:

def criteria = {
   eq 'name', 'employee1'
}
def projection = buildProjection('manager.department',
                                 'eq', ['name', 'department1'])

runCriteria Employee, [criteria, projection], [max: 10, offset: 0]

I doubt there’s a lot of reuse potential here to be honest, but working through this helped me to better understand how GORM runs criteria queries. I’ll be talking about this and some other GORM topics at Greach next month, so if you find this interesting be sure to check out the recording of that talk.

Using MongoDB With Version 2.x of the Grails Spring Security Core Plugin

Sunday, December 01st, 2013

With a few customization steps it’s easy to use MongoDB to store user and role information for the spring-security-core plugin instead of using Hibernate, and after seeing this Stack Overflow question I thought I’d write up some notes on how to do this with the current plugins. Note that much of this is based on this blog post.

I created a demo application using Grails 2.3.3; it’s available on GitHub. The general steps were:

  • $ grails create-app mongoSpringSecurity
  • add the plugins to BuildConfig.groovy
  • $ grails s2-quickstart auth User Role
  • update DataSource.groovy to use MongoDB
  • create a custom UserDetailsService and register it in resources.groovy
  • create a test role and a user in BootStrap.groovy
  • customize the domain classes to use MongoDB
  • add tags to index.gsp to add a login link if you’re not logged in, and show that you’re logged in if you are

One difference between what I do here and what was done in the original blog post is that the custom UserDetailsService is not a Grails service – it’s in src/groovy, not in grails-app/services. It didn’t need to be a real service then and doesn’t now; it’s a coincidence that the Spring Security interface name ends in “Service”. See the plugin documentation for general information about customizing this bean.

You can see the source for the custom bean here. By embedding the authorities in the user domain class, the many-to-many relationship is not needed and the model is a lot simpler, and so is the class implementation – for example there’s no need for a withTransaction block to avoid lazy loading exceptions.

The changes for the User class are fairly minor. You need static mapWith = 'mongo' if you have both the Hibernate and MongoDB plugins; in this case it’s unnecessary but harmless to leave it in. The id field should be an ObjectId, and I retained the other customizations from the earlier blog post (the embedded roles, the addition of the email field, extra constraints, etc.). The Role class changes are similar.

Since we’re using a custom UserDetailsService, we can delete the userLookup.userDomainClassName, userLookup.authorityJoinClassName, and authority.className properties from Config.groovy, and since the roles are embedded in the user class we can delete the generated UserRole class.

You should be able to clone the repo and start the application (assuming you have MongoDB and Grails 2.3.3 already). Click the login link on the start page and after you successfully authenticate, the link should be replaced by a message.

Updated Grails Database Migration plugin

Friday, January 04th, 2013

Edit: January 5 – I released the plugin that adds support for JAXB-based classes; see the plugin page and the documentation for more information.


One of the downsides to releasing a lot of plugins is lots of reported issues. I’ve joked that since there aren’t good ways to know how much use a plugin gets, the best metric is the number of reported bugs and feature requests, and that is mostly true. Using that logic the database-migration plugin is very popular 🙂

I try to address serious issues, but most of this plugin’s issues have to do with generated code. My attitude towards generated code is that it should not be trusted, and should rarely be expected to be completely correct. For example, when you use the dbm-gorm-diff or dbm-generate-gorm-changelog scripts, they do most of your work for you. My hope is that they save you lots of time and that you shouldn’t need to do much work to fix any problems, but you should expect problems.

When I did the What’s new with Grails 2.0 talk at NEJUG a year ago I mentioned this plugin and focused on the GORM-based scripts because I think they’re the best approach to creating migrations. But one of the attendees who also uses Rails said that Rails migrations were better because they have a DSL that you can use to write the migrations. I realized that I was so used to running dbm-gorm-diff that I had neglected to even mention the extensive Groovy DSL that the plugin supports (it’s a 100% clone of the XML syntax in native Liquibase). It’s a good DSL and you can create migrations completely by hand using it, but I can’t see why you would do that given how much you can get for free with the scripts. I mention this story to point out why I think it’s ironic when people complain that it’s tedious to have to fix invalid code that a script generated; feel free to use the DSL directly and forego the broken scripts 😉
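For reference, a hand-written migration in the plugin’s Groovy DSL looks roughly like this (the element names mirror the Liquibase XML tags; the author, id, table, and column names here are invented for illustration):

```groovy
// A changelog written directly in the plugin's Groovy DSL; each
// element corresponds to the Liquibase XML tag of the same name.
databaseChangeLog = {

   changeSet(author: 'me', id: 'create-person-table') {
      createTable(tableName: 'person') {
         column(name: 'id', type: 'bigint', autoIncrement: true) {
            constraints(primaryKey: true, nullable: false)
         }
         column(name: 'name', type: 'varchar(255)') {
            constraints(nullable: false)
         }
      }
   }
}
```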


The bug list for the database-migration plugin was getting a bit big and there were quite a few open pull requests. The tipping point however was seeing this tweet and realizing that I should spend some time on the plugin again.

The pull request that Zan mentioned in his tweet was a big one, adding support for doing migrations on multiple databases, mirroring the multi-datasource support in Grails 2.0. It would be great if all pull requests were this high-quality, including documentation updates and lots of tests. While I was integrating that (I had made some changes since then that required a manual merge, since the GitHub UI wouldn’t do an automatic one, and there were a few conflicts) I worked on the other outstanding issues.

I merged in all of the open pull requests – many thanks for those. I also closed a few bugs that weren’t real bugs or were duplicates, and fixed several others. That made for an interesting JIRA 30-day issue graph.

Many of the other reported issues were variants of the same problem where Liquibase was specifying the size of database columns that don’t support a size (for example bytea(255)). Hibernate does a much better job of this, so I was able to rework things so the Hibernate data types are used where possible instead of what Liquibase generates. So hopefully the generated changelogs will be much more accurate and involve less tweaking.

You can see the release notes of the 1.3 release here and the updated docs here.

Note that the latest version of the plugin is 1.3.1 since there were issues with the JAXB code that I included in the 1.3 release. I removed the code since it depends on Java 7 (and wasn’t completely finished) and will release it as a separate plugin.

Logging Hibernate SQL

Thursday, October 18th, 2012

There are two well-known ways to log Hibernate SQL in Grails; one is to add logSql = true in DataSource.groovy (either in the top-level block for all environments or per-environment)

dataSource {
   dbCreate = ...
   url = ...
   ...
   logSql = true
}

and the other is to use a Log4j logging configuration:

log4j = {
   ...
   debug 'org.hibernate.SQL'
}

The problem with logSql is that it’s too simple – it just dumps the SQL to stdout and there is no option to see the values that are being set for the positional ? parameters. The logging approach is far more configurable: you can log to the console if you want, but you can also configure logging to a file, to a separate file just for these messages, or to any destination of your choice by using an Appender.

But the logging approach is problematic too – by enabling a second Log4j category

log4j = {
   ...
   debug 'org.hibernate.SQL'
   trace 'org.hibernate.type'
}

we can see variable values, but you see them both for PreparedStatement sets and for ResultSet gets, and the gets can result in massive log files full of useless statements. This works because the “Type” classes that Hibernate uses to store and load Java class values to database columns (for example LongType, StringType, etc.) are in the org.hibernate.type package and extend (indirectly) org.hibernate.type.NullableType which does the logging in its nullSafeSet and nullSafeGet methods.

So if you have a GORM domain class

class Person {
   String name
}

and you save an instance

new Person(name: 'me').save()

you’ll see output like this:

DEBUG hibernate.SQL  - insert into person (id, version, name) values (null, ?, ?)
TRACE type.LongType  - binding '0' to parameter: 1
TRACE type.StringType  - binding 'me' to parameter: 2
DEBUG hibernate.SQL  - call identity()

When you later run a query to get one or more instances

def allPeople = Person.list()

you’ll see output like this

DEBUG hibernate.SQL  - select this_.id as id0_0_, this_.version as version0_0_, this_.name as name0_0_ from person this_
TRACE type.LongType  - returning '1' as column: id0_0_
TRACE type.LongType  - returning '0' as column: version0_0_
TRACE type.StringType  - returning 'me' as column: name0_0_

This isn’t bad for one instance but if there were multiple results then you’d have a block for each result containing a line for each column.

I was talking about this yesterday at my Hibernate talk at SpringOne 2GX and realized that it should be possible to create a custom Appender that inspects log statements for these classes and ignores the statements resulting from ResultSet gets. To my surprise it turns out that everything has changed in Grails 2.x because we upgraded from Hibernate 3.3 to 3.6 and this problem has already been addressed in Hibernate.

The output above is actually from a 1.3.9 project that I created after I got unexpected output in a 2.1.1 application. Here’s what I saw in 2.1.1:

DEBUG hibernate.SQL  - 
    /* insert Person
        */ insert 
        into
            person
            (id, version, name) 
        values
            (null, ?, ?)

TRACE sql.BasicBinder  - binding parameter [1] as [BIGINT] - 0

TRACE sql.BasicBinder  - binding parameter [2] as [VARCHAR] - asd

and

DEBUG hibernate.SQL  -
    /* load Author */ select
        author0_.id as id1_0_,
        author0_.version as version1_0_,
        author0_.name as name1_0_
    from
        author author0_
    where
        author0_.id=?

TRACE sql.BasicBinder  - binding parameter [1] as [BIGINT] - 1

TRACE sql.BasicExtractor  - found [0] as column [version1_0_]

TRACE sql.BasicExtractor  - found [asd] as column [name1_0_]

So now instead of doing all of the logging from the types’ base class, it’s been reworked to delegate to org.hibernate.type.descriptor.sql.BasicBinder and org.hibernate.type.descriptor.sql.BasicExtractor. This is great because now we can change the Log4j configuration to

log4j = {
   ...
   debug 'org.hibernate.SQL'
   trace 'org.hibernate.type.descriptor.sql.BasicBinder'
}

and have our cake and eat it too; the SQL is logged to a configurable Log4j destination and only the PreparedStatement sets are logged.

Note that the SQL looks different in the second examples not because of a change in Grails or Hibernate, but because I always enable SQL formatting (with format_sql) and comments (with use_sql_comments) in test apps so that when I do enable logging the output is more readable; I forgot to do that for the 1.3 app:

hibernate {
   cache.use_second_level_cache = true
   cache.use_query_cache = false
   cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
   format_sql = true
   use_sql_comments = true
}

Keeping Grails namedQueries DRY

Thursday, September 27th, 2012

The named queries support in Grails is a cool feature where you define a partial criteria query and can call it directly, or compose it with additional criteria to create complex queries. For example you might want to query for instances created before a specified date:

class Foo {
   String name
   Date dateCreated
   Integer bar

   static namedQueries = {
      createdBefore { date ->
         le 'dateCreated', date
      }
   }
}

Using this we can use just the named query:

def moreThanTwoDaysOld = Foo.createdBefore(new Date() - 2).list()

or further refine the query:

def moreThanTwoDaysOld = Foo.createdBefore(new Date() - 2) {
   lt 'bar', 20
}

But what if you have common named queries across various domain classes? It turns out it’s not hard to keep things DRY, but it’s a bit tricky.


To see how the solution works, it’s important to understand how the definition of the named queries is implemented. namedQueries is a static Closure in a domain class, containing method calls whose names will end up being the names of the queries. This will be clearer if we add in the optional parentheses that were omitted above:

static namedQueries = {
   createdBefore({ date ->
      le 'dateCreated', date
   })
}

So it should hopefully be clear that this is an invocation of the createdBefore method, and its only argument is a Closure containing the various criteria statements. The Grails builder that parses the namedQueries blocks has method-missing support since there’s obviously no createdBefore method, and the method name and argument are used to build a named query. If we’re going to change where the named queries are defined, we need to continue to call these missing methods.
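The method-missing capture can be sketched in plain Groovy with an invented NamedQueryReader (the real Grails builder does considerably more):

```groovy
// Captures each 'missing' method call as a named query definition,
// the way the Grails builder captures createdBefore.
class NamedQueryReader {
   Map<String, Closure> queries = [:]
   def methodMissing(String name, args) {
      queries[name] = (Closure) args[0]
   }
}

// a namedQueries-style block; the criteria body is a simple
// string-building closure here, just for illustration
def namedQueries = {
   createdBefore({ date -> "dateCreated <= $date" })
}

def reader = new NamedQueryReader()
namedQueries.delegate = reader
namedQueries()

assert reader.queries.keySet() == ['createdBefore'] as Set
assert reader.queries.createdBefore instanceof Closure
```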

The approach I suggest is to define static methods in a helper class in src/groovy which return the Closures that define the queries. The method names don’t matter, but for consistency it’s best to use the same method name in the utility class as the named query’s name in the domain class:

package com.burtbeckwith.blog

class NamedQueries {

   static createdBefore() {{ date ->
      le 'dateCreated', date
   }}
}

The syntax is a bit weird since the method returns a Closure, so you end up with double braces.
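Seen in isolation, with a simple string-building closure standing in for real criteria, the outer braces are the method body and the inner braces are the closure literal it returns:

```groovy
class Queries {
   // outer braces: method body; inner braces: the returned Closure
   static createdBefore() {{ date ->
      "dateCreated <= $date"
   }}
}

def c = Queries.createdBefore()
assert c instanceof Closure
assert c(42).toString() == 'dateCreated <= 42'
```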

Now update the domain class so the argument of the missing createdBefore method isn’t an inline Closure, but the one that the NamedQueries.createdBefore() method returns:

import com.burtbeckwith.blog.NamedQueries

class Foo {
   String name
   Date dateCreated
   Integer bar

   static namedQueries = {
      createdBefore NamedQueries.createdBefore()
   }
}

Now this named query can be reused in any domain class. The code that uses the queries doesn’t change at all, and if you later decide to modify the logic, you only need to change it in one place instead of every location that you copy/pasted it.


This post is a continuation of my earlier Stuff I Learned Consulting post since I used this on that consulting engagement, but I thought that this topic was big enough that it deserved its own post.

Stuff I Learned Consulting

Wednesday, September 26th, 2012

I don’t do much Grails consulting since I work for the Engineering group, and we have an excellent group of support engineers that usually work directly with clients. I do occasionally teach the 3-day Groovy and Grails course but I’ve only been on two onsite consulting gigs so far, and one was a two-week engagement that ended last week. As is often the case when you teach something or help someone else out, I learned a lot and was reminded of a lot of stuff I’d forgotten about, so I thought it would be good to write some of that down for future reference.


SQL Logging

There are two ways to view SQL output from queries: adding logSql = true in DataSource.groovy and configuring Log4j loggers. The Log4j approach is a lot more flexible since it doesn’t just dump to stdout – the output can be routed to a file or other appender and conveniently enabled and disabled. But it turns out it’s also easy to toggle logSql console logging at runtime. Get a reference to the sessionFactory bean (e.g. using dependency injection with def sessionFactory) and turn it on with

sessionFactory.settings.sqlStatementLogger.logToStdout = true

and off with

sessionFactory.settings.sqlStatementLogger.logToStdout = false

stacktrace.log

The stacktrace.log file was getting very large and they wanted to configure it to use a rolling file appender. Seemed simple enough, but it took a lot longer than I expected. The trick is to create an appender with the name 'stacktrace'; the Grails logic that parses the Log4j DSL looks for an existing appender and uses it, and only configures the default one if there isn’t one already configured. So here’s one that configures a RollingFileAppender with a maximum of 10 files, each a maximum of 10MB in size, and with the standard layout pattern. In addition it includes logic to determine if it’s deployed in Tomcat so it can write to the Tomcat logs folder, or the target folder if you’re using run-app.

If you’re deploying to a different container, adjust the log directory calculation appropriately.

appenders {
   String logDir = grails.util.Environment.warDeployed ?
                       System.getProperty('catalina.home') + '/logs' :
                       'target'
   rollingFile name: 'stacktrace',
               maximumFileSize: 10 * 1024 * 1024,
               file: "$logDir/stacktrace.log",
               layout: pattern(
                   conversionPattern: '%d [%t] %-5p %c{2} %x - %m%n'),
               maxBackupIndex: 10
}

Dynamic fooId property

In a many-to-one where you have a Foo foo field (or static belongsTo = [foo: Foo] which triggers adding a ‘foo’ field) you can access its foreign key with the dynamic fooId property. This can be used in a few ways. Since references like this are lazy by default, checking if a nullable reference exists using foo != null involves loading the entire instance from the database. But checking fooId != null involves no database access.

Other queries or updates that really only need the foreign key will be cheaper using fooId. For example, to set a reference in another instance you would typically use code like this:

bar2.foo = bar1.foo
bar2.save()

But you can use the load method

bar2.foo = bar1.fooId ? Foo.load(bar1.fooId) : null
bar2.save()

and avoid loading the Foo instance just to set its foreign key in the second instance and then discard it.

Deleting by id is less expensive too; ordinarily you use get to load an instance and call its delete method, but retrieving the entire instance isn’t needed. You can do this instead:

Foo.load(bar.fooId).delete()

DRY constraints

You can use the importFrom method inside a constraints block in a domain class to avoid repeating constraints. You can import all constraints from another domain class:

static constraints = {
   someProperty nullable: true
   ...
   importFrom SomeOtherDomainClass
}

and optionally use the include and/or exclude properties to use a subset:

static constraints = {
   someProperty nullable: true
   ...
   importFrom SomeOtherDomainClass, exclude: ['foo', 'bar']
}

Flush event listener

They were seeing some strange behavior where collections that weren’t explicitly modified were being changed and saved, causing StaleObjectStateExceptions. It wasn’t clear what was triggering this behavior, so I suggested registering a Hibernate FlushEventListener to log the state of the dirty instances and collections during each flush:

package com.burtbeckwith.blog

import org.hibernate.HibernateException
import org.hibernate.collection.PersistentCollection
import org.hibernate.engine.EntityEntry
import org.hibernate.engine.PersistenceContext
import org.hibernate.event.FlushEvent
import org.hibernate.event.FlushEventListener

class LoggingFlushEventListener implements FlushEventListener {

   void onFlush(FlushEvent event) throws HibernateException {
      PersistenceContext pc = event.session.persistenceContext

      pc.entityEntries.each { instance, EntityEntry value ->
         if (instance.dirty) {
            println "Flushing instance $instance"
         }
      }

      pc.collectionEntries.each { PersistentCollection collection, value ->
         if (collection.dirty) {
            println "Flushing collection '$collection.role' $collection"
         }
      }
   }
}

It’s not sufficient in this case to use the standard hibernateEventListeners map (described in the docs here) since that approach adds your listeners to the end of the list, and this listener needs to be at the beginning. So instead use this code in BootStrap.groovy to register it:

import org.hibernate.event.FlushEventListener
import com.burtbeckwith.blog.LoggingFlushEventListener

class BootStrap {

  def sessionFactory

  def init = { servletContext ->

    def listeners = [new LoggingFlushEventListener()]
    def currentListeners = sessionFactory.eventListeners.flushEventListeners
    if (currentListeners) {
      listeners.addAll(currentListeners as List)
    }
    sessionFactory.eventListeners.flushEventListeners =
            listeners as FlushEventListener[]
  }
}

“Read only” objects and Sessions

The read method was added to Grails a while back, and it works like get except that it marks the instance as read-only in the Hibernate Session. It’s not really read-only, but if it is modified it won’t be a candidate for auto-flushing using dirty detection. But you can explicitly call save() or delete() and the action will succeed.

This can be useful in a lot of ways, and in particular it is more efficient if you won’t be changing the instance since Hibernate will not maintain a copy of the original database data for dirty checking during the flush, so each instance will use about half of the memory that it would otherwise.

One limitation of the read method is that it only works for instances loaded individually by id. But there are other approaches that affect multiple instances. One is to make the entire session read-only:

session.defaultReadOnly = true

Now all loaded instances will default to read-only, for example instances from criteria queries and finders.

A convenient way to access the session is the withSession method on an arbitrary domain class:

SomeDomainClass.withSession { session ->
   session.defaultReadOnly = true
}

It’s rare that an entire session will be read-only though. You can set the results of an individual criteria query to be read-only with the setReadOnly method:

def c = Account.createCriteria()
def results = c {
   between("balance", 500, 1000)
   eq("branch", "London")
   maxResults(10)
   setReadOnly true
}

One significant limitation of this technique is that attached collections are not affected by the read-only status of the owning instance (and there doesn’t seem to be a way to configure a collection to ignore changes on a per-instance basis).

Read more about this in the Hibernate documentation.

Updates for “Delayed SessionFactory Creation in Grails”

Wednesday, September 26th, 2012

Back in the beginning of 2010 I did a post on how to delay creating the SessionFactory based on a discussion on the User mailing list. This has come up again and I thought I’d look and see if things had changed for Grails 2.

The general problem is the same as it was; Grails and Hibernate create database connections during startup to help with configuration, so the information that is auto-discovered has to be explicitly specified. In addition any eager initialization that can wait should wait.

One such configuration item is the lobHandlerDetector bean. This hasn’t changed from before, so the approach involves specifying the bean yourself (and it’s different depending on whether you’re using Oracle or another database). Since it’s the same I won’t include the details here; see the previous post.

Another is the Dialect. Again, this is the same as before – just specify it in DataSource.groovy. This is a good idea in general since there might be particular features you need in a non-default Dialect class, and specifying org.hibernate.dialect.MySQL5InnoDBDialect for MySQL guarantees that you’ll be using transactional InnoDB tables instead of non-transactional MyISAM tables.

The remaining issues have to do with eager initialization. I started down the path of reworking how to lazily initialize the SessionFactory, since using a Spring bean post-processor is significantly less involved (and less brittle) than the approach I had previously used. But it turns out that the more recent version of Hibernate that we’re now using supports a flag that avoids database access during SessionFactory initialization, hibernate.temp.use_jdbc_metadata_defaults. So add this to the hibernate block in DataSource.groovy:

hibernate {
   ...
   temp.use_jdbc_metadata_defaults = false
}

And the last issue is the DataSource itself. Up to this point all of the changes will avoid getting a connection, but the pool might pre-create connections at startup. The default implementation in Grails is org.apache.commons.dbcp.BasicDataSource and its initial size is 0, so you’re ok if you haven’t configured a different implementation. If you have, be sure to set its initial size to 0 (this isn’t part of the DataSource interface, so the setter method is implementation-specific, if it even exists).
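For example, with a BasicDataSource-style pool the relevant setting would look something like this in DataSource.groovy (a sketch: the properties block is passed through to the pool implementation as bean properties, and initialSize is the BasicDataSource property name – check your pool’s actual setters):

```groovy
dataSource {
   pooled = true
   driverClassName = 'org.h2.Driver'
   username = 'sa'
   password = ''
   // bean properties applied to the pool implementation;
   // initialSize = 0 means no connections are created at startup
   properties {
      initialSize = 0
   }
}
```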


If you’re using multiple datasources, you can delay their database connectivity too. There is a lobHandlerDetector bean for each datasource, so for example if you have a second one with the name “ds2”, configure a lobHandlerDetector_ds2 bean in resources.groovy. Likewise for the Dialect; specify it in the dataSource_ds2 block in DataSource.groovy. Set the use_jdbc_metadata_defaults option in the hibernate_ds2 block:

hibernate_ds2 {
   ...
   temp.use_jdbc_metadata_defaults = false
}

And finally, as for the single-datasource case, if you’ve reconfigured secondary datasource beans’ pool implementations, set their initial sizes to 0.

Hibernate Bags in Grails 2.0

Monday, November 14th, 2011

When I’ve talked in the past about collection mapping in Grails (you can see a video of a SpringOne/2GX talk here) I mentioned that the current approach of using Sets or Lists is problematic and provided workarounds. I mentioned at the time that Hibernate has support for Bags which don’t enforce uniqueness or order like Sets and Lists do, so if GORM supported Bags we could just use those. So I added support for Bags to GORM for Grails 2.0 and thought that was that.

I thought it’d be interesting to demo this at my GORM talk at this year’s SpringOne/2GX but when I created a small test application it wasn’t working like I remembered. In fact it was actually worse than the problems I was working around. So I put that away with a mental note to get back to this soon, and before 2.0 final is released.

It turns out there’s good news and bad news. The good news is that it’s not completely broken. The bad news is that it’s mostly broken.


First the good news. If you have a one-to-many that doesn’t use a join table, using a Bag works mostly as expected. As an example, consider an Author/Book mapping where a book has one author, and an author can have many books:

class Author {
   String name
   Collection books
   static hasMany = [books: Book]
}
class Book {
   String title
   static belongsTo = [author: Author]
}

Using the Map syntax for the belongsTo mapping is the key to avoiding the join table and relating the tables with a foreign key from the book table to the author table. If you run grails schema-export the output will be something like

create table author (
   id bigint generated by default as identity,
   version bigint not null,
   name varchar(255) not null,
   primary key (id)
);

create table book (
   id bigint generated by default as identity,
   version bigint not null,
   author_id bigint not null,
   title varchar(255) not null,
   primary key (id)
);

alter table book add constraint FK2E3AE9CD85EDFA
foreign key (author_id) references author;

If you run this initializing code in a Grails console with SQL logging enabled (add logSql = true in DataSource.groovy)

def author = new Author(name: 'Hunter S. Thompson')
author.addToBooks(title: 'Fear and Loathing in Las Vegas')
author.save()

you’ll see output like this:

insert into author (id, version, name) values (null, ?, ?)

insert into book (id, version, author_id, title) values (null, ?, ?, ?)

update author set version=?, name=? where id=? and version=?

which is ok; it inserts the author and the book, although it bumps the version of the Author. I’ll come back to that.

If you run this updating code:

def author = Author.get(1)
author.addToBooks(title: "Hell's Angels: A Strange and Terrible Saga")
author.save()

you’ll see output like this:

select author0_.id as id0_0_, author0_.version as version0_0_,
author0_.name as name0_0_ from author author0_ where author0_.id=?

insert into book (id, version, author_id, title) values (null, ?, ?, ?)

update author set version=?, name=? where id=? and version=?

This is also basically ok – it loads the author, inserts the book, and versions the author.

If you map the belongsTo with the non-map syntax (static belongsTo = Author) you’ll get this DDL:

create table author (
   id bigint generated by default as identity,
   version bigint not null,
   name varchar(255) not null,
   primary key (id)
);

create table author_book (
   author_books_id bigint,
   book_id bigint
);

create table book (
   id bigint generated by default as identity,
   version bigint not null,
   title varchar(255) not null,
   primary key (id)
);

alter table author_book add constraint FK2A7A111D3FA913A
foreign key (book_id) references book;

alter table author_book add constraint FK2A7A111DC46A00AF
foreign key (author_books_id) references author;

and running the initializing code above will result in output that’s similar to before, with the addition of inserting into the join table:

insert into author (id, version, name) values (null, ?, ?)

insert into book (id, version, title) values (null, ?, ?)

update author set version=?, name=? where id=? and version=?

insert into author_book (author_books_id, book_id) values (?, ?)

but running the updating code results in this:

select author0_.id as id4_0_, author0_.version as version4_0_,
author0_.name as name4_0_ from author author0_ where author0_.id=?

select books0_.author_books_id as author1_4_0_, books0_.book_id as
book2_0_ from author_book books0_ where books0_.author_books_id=?

select book0_.id as id3_0_, book0_.version as version3_0_,
book0_.title as title3_0_ from book book0_ where book0_.id=?

insert into book (id, version, title) values (null, ?, ?)

update author set version=?, name=? where id=? and version=?

delete from author_book where author_books_id=?

insert into author_book (author_books_id, book_id) values (?, ?)

insert into author_book (author_books_id, book_id) values (?, ?)

This is not good. It reads the author, then all of the books for that author (the part we’re trying to avoid), inserts the book, and then deletes every row from the join table for this author, and re-inserts rows for each element in the Bag. Ouch.


If you convert the relationship to a many-to-many with Bags on both sides:

class Author {
   String name
   Collection books
   static hasMany = [books: Book]
}
class Book {
   String title
   Collection authors
   static hasMany = [authors: Author]
   static belongsTo = Author
}

and run this initializing code:

def author = new Author(name: 'Hunter S. Thompson')
author.addToBooks(title: 'Fear and Loathing in Las Vegas')
author.save()

you get this output:

insert into author (id, version, name) values (null, ?, ?)

insert into book (id, version, title) values (null, ?, ?)

update author set version=?, name=? where id=? and version=?

update book set version=?, title=? where id=? and version=?

insert into author_books (author_id, book_id) values (?, ?)

It inserts the author and the book, then versions both rows, and inserts a row into the join table.

If you run this updating code:

def author = Author.get(1)
author.addToBooks(title: "Hell's Angels: A Strange and Terrible Saga")
author.save()

then the output is similar to the output for one-to-many with a join table:

select author0_.id as id0_0_, author0_.version as version0_0_,
author0_.name as name0_0_ from author author0_ where author0_.id=?

select books0_.author_id as author1_0_0_, books0_.book_id as book2_0_
from author_books books0_ where books0_.author_id=?

insert into book (id, version, title) values (null, ?, ?)

update author set version=?, name=? where id=? and version=?

update book set version=?, title=? where id=? and version=?

delete from author_books where author_id=?

insert into author_books (author_id, book_id) values (?, ?)

insert into author_books (author_id, book_id) values (?, ?)

It loads the author, then all of the book ids from the join table (to create proxies, which are lighter-weight than full domain class instances but there will still be N of them in memory), then inserts the new book, versions both rows, and again deletes every row from the join table and reinserts them. Ouch again.


So for the two cases where there are join tables, we have a problem. Hibernate doesn’t worry about duplicates or order in-memory, but the join tables can’t have duplicate records, so it has to pessimistically clear the data and reinsert it. This has all of the negatives of the non-Bag approach and adds another big one.

Even in the first case I described where there’s no join table, there’s still a problem. Since the Author’s version gets incremented when you add a Book (you’re editing a property of the Author, so it’s considered to be updated even though it’s a collection pointing to another table) there’s a high risk that concurrently adding child instances will cause optimistic locking exceptions for the Author, even though you just want to insert rows into the book table. And this is the case for all three scenarios.


So I guess I’m back to advocating the approach from my earlier talks; don’t map a collection of Books in the Author class, but add an Author field to the Book class instead:

class Author {
   String name
}
class Book {
   String title
   Author author
}
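With this mapping you work from the Book side instead of navigating a collection on Author, so adding a book is a plain insert and never touches (or versions) the author row. A sketch, using standard Grails dynamic finders:

```groovy
def author = Author.get(1)

// insert a book without loading or updating the author
new Book(title: "Hell's Angels: A Strange and Terrible Saga",
         author: author).save()

// retrieve the author's books on demand, with paging if needed
def books = Book.findAllByAuthor(author, [max: 10, sort: 'title'])
```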

And for the many-to-many case, map the “author_books” table with a domain class:

class Author {
   String name
}
class Book {
   String title
}
class AuthorBook {
   Author author
   Book book
   ...
}
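Queries and inserts then go through the join domain class. A sketch, assuming the AuthorBook class above:

```groovy
// create the relationship without touching a mapped collection
new AuthorBook(author: author, book: book).save()

// all books for an author, via the mapped join table
def books = AuthorBook.findAllByAuthor(author)*.book
```

Since each relationship is its own row (and its own domain instance), adding one is a single insert with no collection load, no join-table delete/re-insert, and no version bump on Author or Book.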

Customizing GORM with a Configuration Subclass

Tuesday, December 28th, 2010

GORM mappings let you configure pretty much anything you need in your Grails applications, but occasionally there are more obscure tweaks that aren’t directly supported, and in this case a custom Configuration class is often the solution.

By default Grails uses an instance of GrailsAnnotationConfiguration and the standard approach is to subclass that to retain its functionality and override the secondPassCompile() method.

As an example, let’s look at what is required to specify the foreign key name between two related domain classes. This is inspired by this mailing list question but is also a personal pet peeve since I always name foreign keys in traditional Hibernate apps (using annotations or hbm.xml files). FK_USER_COUNTRY is a lot more useful than FK183C3385A9B72.

One restriction is that you need to code the class in Java – a Groovy class won’t compile due to method visibility issues.

Create this class (with an appropriate name and package for your application) in src/java:

package com.yourcompany.yourapp;

import java.util.Collection;
import java.util.Iterator;

import org.codehaus.groovy.grails.orm.hibernate.cfg.GrailsAnnotationConfiguration;
import org.hibernate.MappingException;
import org.hibernate.mapping.ForeignKey;
import org.hibernate.mapping.PersistentClass;
import org.hibernate.mapping.RootClass;

public class MyConfiguration extends GrailsAnnotationConfiguration {

   private static final long serialVersionUID = 1;

   private boolean _alreadyProcessed;

   @SuppressWarnings({"unchecked", "rawtypes"})
   @Override
   protected void secondPassCompile() throws MappingException {
      super.secondPassCompile();

      if (_alreadyProcessed) {
         return;
      }

      for (PersistentClass pc : (Collection<PersistentClass>)classes.values()) {
         if (pc instanceof RootClass) {
            RootClass root = (RootClass)pc;
            if ("com.yourcompany.yourapp.User".equals(root.getClassName())) {
               for (Iterator iter = root.getTable().getForeignKeyIterator();
                       iter.hasNext();) {
                  ForeignKey fk = (ForeignKey)iter.next();
                  fk.setName("FK_USER_COUNTRY");
               }
            }
         }
      }

      _alreadyProcessed = true;
   }
}

This is a very simplistic example and everything is hard-coded. A real implementation would check that the foreign key exists and is the correct one, or might be more sophisticated and automatically rename all foreign keys using the FK_ prefix plus the table names of the two related tables.
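The automatic-renaming idea mentioned above could factor the naming convention into a small helper; the class and method names here are hypothetical, and the convention (FK_ prefix plus uppercased table names) is just one reasonable choice:

```java
// Derives a readable constraint name from the two related table names,
// e.g. FK_USER_COUNTRY instead of a generated name like FK183C3385A9B72.
public class ForeignKeyNamer {

   public static String nameFor(String referencingTable, String referencedTable) {
      return "FK_" + referencingTable.toUpperCase()
           + "_" + referencedTable.toUpperCase();
   }
}
```

In the secondPassCompile() loop you would then call something like fk.setName(ForeignKeyNamer.nameFor(table, referencedTable)) for each ForeignKey instead of hard-coding the name.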

This class won’t be used automatically; to register it, set the configClass property in grails-app/conf/DataSource.groovy:

dataSource {
   pooled = true
   driverClassName = '...'
   username = '...'
   password = '...'
   configClass = 'com.yourcompany.yourapp.MyConfiguration'
}

For other examples of using this approach, see these posts in the Nabble archive:

Grails Binary Artifacts Plugin

Monday, December 13th, 2010

Last week at the Boston Grails Meetup we were talking about the state of Grails plugins (the plugin collective, certified plugins, etc.) and the idea of binary plugins came up. This has been discussed as a way to deploy closed-source plugins – with the current approach the plugin zip file contains all of the source code and the plugin descriptor, and there’s no way to use compiled classes as controllers, services, etc. I suppose it’s because of the work I did with the Dynamic Controller plugin (used by the App Info plugin) and the dynamic domain class stuff that I worked on (which is used in the Dynamic Domain Class plugin) but this stuck in my head and I ended up spending most of last weekend working on a plugin for this and finished it up this weekend.

The approach I took is to write a plugin (binary-artifacts) that proxies your plugin(s) which contain compiled classes instead of source code. Grails has some fixed rules about what it means for a class to be an artifact of a specific type, so these needed to be worked around, but there really aren’t any hacks here. For example a service must be under the grails-app/services folder and its name must end in Service, and controllers, taglibs, and filters have similar rules. Domain classes have no name restrictions but must be under grails-app/domain.

Instead of using these conventions, artifacts are configured with a properties file which must be named <appname>-binaryartifacts.properties and be in the classpath (so it’s best to put it in grails-app/conf or src/java).

Plugin descriptor

The plugin descriptor is created by the create-plugin script and contains several plugin properties (version, grailsVersion, author, title, etc.) and six closures (all optional) that are called either at startup during a particular initialization phase (doWithWebDescriptor, doWithSpring, doWithDynamicMethods, and doWithApplicationContext) or when the configuration or a watched resource is modified in dev mode (onChange and onConfigChange).

When using binary artifacts you still specify the property values the traditional way and can implement any of the six closures inline, but you can instead register a plugin ‘stub’ that’s called at each phase. This is a Groovy class containing any or all of the six supported closures. It will be instantiated and each closure called with the appropriate delegate set, so you can put Spring bean creation, metaclass enhancements, etc. in a compiled Groovy class. Register the class name in the properties file using the stub key.

Be sure to register a dependency on this plugin in your plugin descriptor, e.g.

def dependsOn = ['binaryArtifacts': '1.0 > *']

Codecs

Codecs can be written in Groovy or Java and must follow the standard naming convention. Code them just like you do traditional codecs, but put the source under src/groovy or src/java and register the comma-delimited class names in the properties file using the codecs key.

Controllers

Controllers must be written in Groovy (since they’re implemented with Closures) and must follow the standard naming convention. Code them just like you do traditional controllers, but put the source under src/groovy and register the comma-delimited class names in the properties file using the controllers key.

Domain classes

Domain classes must be written in Groovy (to support the mapping and constraints closures). Code them just like you do traditional domain classes, but put the source under src/groovy and register the comma-delimited class names in the properties file using the domainClasses key.

Filters

Filters must be written in Groovy (since they’re implemented with Closures) but you can use whatever class names you want. Code them just like you do traditional filter classes, but put the source under src/groovy and register the comma-delimited class names in the properties file using the filters key.

Services

Services can be written in Groovy or Java (but if you’re doing database work with GORM classes it’ll be a lot more convenient to use Groovy) but you can use whatever class names you want. Code them just like you do traditional services, but put the source under src/groovy or src/java and register the comma-delimited class names in the properties file using the services key.

In addition you have flexibility with the Spring bean name that the service is registered under. Traditionally FooService is registered as fooService but using this plugin you can use whatever valid name you want. Register the class name and Spring bean name using the classname:beanname syntax in the properties file.

Taglibs

Taglibs must be written in Groovy (since they’re implemented with Closures) but you can use whatever class names you want. Code them just like you do traditional taglib classes, but put the source under src/groovy and register the comma-delimited class names in the properties file using the taglibs key.
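Taken together, a binary plugin’s properties file might look like the sketch below. All of the package and class names are hypothetical; the keys are the ones described in the sections above (plus gspPropertyFile, covered under GSPs):

```properties
# myplugin-binaryartifacts.properties -- names are illustrative
stub=com.mycompany.myplugin.MyPluginStub
codecs=com.mycompany.myplugin.ShoutCodec
controllers=com.mycompany.myplugin.AdminController
domainClasses=com.mycompany.myplugin.Widget,com.mycompany.myplugin.Gadget
filters=com.mycompany.myplugin.SecurityFilters
# classname:beanname syntax overrides the default Spring bean name
services=com.mycompany.myplugin.ReportService:reportHelperService
taglibs=com.mycompany.myplugin.UtilTagLib
gspPropertyFile=myplugin-gsp-views.properties
```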

Scripts

Scripts are still implemented as Gant scripts under the scripts folder, but you can delegate the execution of a target to a compiled class. Include the plugin’s _RunBinaryScript.groovy script and do whatever initialization you need (e.g. using depends()) and call runBinaryScript(classname). The specified class will be instantiated and must have a closure named executeClosure. The closure’s delegate will be configured from the calling script’s delegate and invoked.

GSPs

GSP support isn’t 100% complete since compiled GSPs aren’t supported in Grails in run-app. This means that you can either keep your GSPs under the grails-app/views folder and ship the source, or compile them (this is handled by the plugin) but then they’ll only work when deployed in a war.

Code them just like you do traditional GSPs, and put the source wherever you want. If you want them to work in run-app mode keep them under grails-app/views, but if you’re precompiling then you can put them elsewhere and configure the location with the com.burtbeckwith.binaryartifacts.gspFolder option in Config.groovy.

You don’t register individual GSPs in the properties file; instead the GSP precompilation script generates a properties file named <appname>-gsp-views.properties, and you register that file name in the main properties file under the gspPropertyFile key.

package-binary-plugin

Instead of running package-plugin as you would for a traditional plugin, use package-binary-plugin. The script takes no arguments; it compiles your code (and GSPs if configured), generates the required properties file, and zips the plugin in the same format as a traditional plugin. If you unpack the zip file you'll notice that the compiled classes are kept in a jar file in the lib folder.

Configuration

There are only two configuration options that control plugin behavior and they're both for GSPs. Both are set in Config.groovy. The first is com.burtbeckwith.binaryartifacts.gspFolder and it allows you to override the default location of your GSPs (grails-app/views). The other is com.burtbeckwith.binaryartifacts.compileGsp which can be used to disable precompilation of GSPs (defaults to true).
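For example (the folder path here is just an illustration):

```groovy
// Config.groovy
com.burtbeckwith.binaryartifacts.gspFolder = 'src/gsp'
com.burtbeckwith.binaryartifacts.compileGsp = true
```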


You can download a sample plugin that demonstrates the process. It has two domain classes, two services (one in Groovy and one in Java), a codec (in Java, although Groovy works too), a controller and two GSPs, a filters class, and a script. The plugin stub registers a context and session listener in web.xml and adds a bark method to String's metaclass.

Package the plugin by calling grails package-binary-plugin (or use the included build.xml) and install it the usual way, e.g. grails install-plugin /path/to/grails-binary-artifacts-test-0.1.zip.

Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.