Archive for the Category 'hibernate'

Building GORM Criteria Queries Dynamically

Monday, March 21st, 2016

I originally wrote most of the queries in the spring-security-ui plugin using HQL because I find it more intuitive than criteria queries, but HQL only works with Hibernate and relational databases. A pull request updated the queries to use criteria queries so the plugin can be used with NoSQL datastores, but one query didn’t fit the programming style that I was using. That wasn’t a big deal, but since a lot of the controller code is basically CRUD code and very similar from one controller to the next, I’ve tried to keep the code generic and push shared logic into the base classes.

The original HQL included this

hql.append " AND e.aclObjectIdentity.aclClass.id=:aclClass"

and the converted criteria code was

aclObjectIdentity {
   aclClass {
      eq 'id', params.long('aclClass')
   }
}

with the whole query being similar to this:

def results = lookupClass().createCriteria().list(max: max, offset: offset) {
   // other standard criteria method calls

   if (params.aclClass) {
      aclObjectIdentity {
         aclClass {
            eq 'id', params.long('aclClass')
         }
      }
   }
}

That got me thinking about creating a way to represent that two-level projection and criterion generically.

If we restore the omitted optional parentheses the code becomes

aclObjectIdentity({
   aclClass({
      eq('id', params.long('aclClass'))
   })
})

So it should be clearer that this is a sequence of method calls: calling aclObjectIdentity with a closure argument, then aclClass with a closure argument, and finally eq with a String and a long argument. Splitting out the closures as local variables makes this even more apparent, first as

def aclClassClosure = {
   eq('id', params.long('aclClass'))
}

aclObjectIdentity({
   aclClass(aclClassClosure)
})

and then

def aclClassClosure = {
   eq 'id', params.long('aclClass')
}

def aclObjectIdentityClosure = {
   aclClass(aclClassClosure)
}

aclObjectIdentity(aclObjectIdentityClosure)

To make this a bit more concrete, let’s say we have three domain classes:

Department:

class Department {
   String name
}

Manager:

class Manager {
   String name
   Department department
}

and Employee:

class Employee {
   String name
   Manager manager
}

We create some instances:

Department d = new Department(name: 'department1').save()
Manager m = new Manager(name: 'manager1', department: d).save()
Employee e1 = new Employee(name: 'employee1', manager: m).save()
Employee e2 = new Employee(name: 'employee2', manager: m).save()

and later want to run a query:

Employee.createCriteria().list(max: 10, offset: 0) {
   eq 'name', 'employee1'

   manager {
      department {
         eq 'name', 'department1'
      }
   }
}

My goal is to represent this query with only some helper methods and without any closures (or as few as possible). Splitting it out as above, we have

def departmentClosure = {
   eq 'name', 'department1'
}

def managerClosure = {
   department(departmentClosure)
}

def criteriaClosure = {
   eq 'name', 'employee1'

   manager(managerClosure)
}

Employee.createCriteria().list([max: 10, offset: 0], criteriaClosure)

When the query is run, the delegate of criteriaClosure is set to an instance of HibernateCriteriaBuilder when using Hibernate, or an analogous builder for MongoDB or whatever other GORM implementation you’re using. The builder has defined methods for eq, like, between, etc., so when you make those calls in your criteria closure they’re run on the builder.
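
To see that delegation mechanism in isolation, here's a minimal standalone sketch; FakeBuilder is just a made-up stand-in for the real builder, not anything from GORM:

class FakeBuilder {
   def eq(String propertyName, value) {
      println "eq $propertyName = $value"
   }
}

def criteriaClosure = {
   eq 'name', 'employee1'   // not defined locally, so the call falls through to the delegate
}

criteriaClosure.delegate = new FakeBuilder()
criteriaClosure()   // prints "eq name = employee1"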

It turns out that it works the same way if you split the closure into multiple closures and call them with the builder as the delegate for each. So a method like this works:

def runCriteria(Class clazz, List<Closure> criterias, Map paginateParams) {
   clazz.createCriteria().list(paginateParams) {
      for (Closure criteria in criterias) {
         criteria.delegate = delegate
         criteria()
      }
   }
}

and that means that we can split

Employee.createCriteria().list(max: 10, offset: 0) {
   eq 'name', 'employee1'

   manager {
      department {
         eq 'name', 'department1'
      }
   }
}

into

def closure1 = {
   eq 'name', 'employee1'
}

def closure2 = {
   manager {
      department {
         eq 'name', 'department1'
      }
   }
}

and run it as

runCriteria Employee, [closure1, closure2], [max: 10, offset: 0]

But how can we make that projection generic? It’s an inner method call, wrapped in one or more closures that project down to another domain class.

What I ultimately want is to be able to specify a projection with an inner criteria call without closures:

def projection = buildProjection('manager.department',
                                 'eq', ['name', 'department1'])
runCriteria Employee, [closure1, projection], [max: 10, offset: 0]

Here’s the buildProjection method that does this:

Closure buildProjection(String path, String criterionMethod, List args) {

   def invoker = { String projectionName, Closure subcriteria ->
      delegate."$projectionName"(subcriteria)
   }

   def closure = { ->
      delegate."$criterionMethod"(args)
   }

   for (String projectionName in (path.split('\\.').reverse())) {
      closure = invoker.clone().curry(projectionName, closure)
   }

   closure
}

To understand how this works, look again at the innermost closure:

department {
   eq 'name', 'department1'
}

This will be invoked as a method call on the delegate, in effect

delegate.department({
   eq 'name', 'department1'
})

Groovy lets us call methods dynamically using GStrings, so this is the same as

String methodName = 'department'

delegate."$methodName"({
   eq 'name', 'department1'
})

So we can represent the nested closures as an inner closure invoked as the closure argument of its containing closure, and that invoked as the closure argument of its containing closure, and so on until we run out of levels.

And we can build a closure that calls eq 'name', 'department1' (or any criterion method with arguments, this is just a simplified example), as

def closure = { ->
   delegate."$criterionMethod"(args)
}

So to represent the nested closures, start with an ‘invoker’ closure:

def invoker = { String projectionName, Closure subcriteria ->
   delegate."$projectionName"(subcriteria)
}

and then, working from the inside out, successively clone it at each nesting level and curry each clone to embed the projection name and its inner closure (the curried result takes no arguments, which is what the criteria builder expects when it invokes it):

for (String projectionName in (path.split('\\.').reverse())) {
   closure = invoker.clone().curry(projectionName, closure)
}
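
Conceptually, for buildProjection('manager.department', 'eq', ['name', 'department1']) the loop ends up building the same structure as this hand-written equivalent (shown only to illustrate what the curried clones look like):

String criterionMethod = 'eq'
List args = ['name', 'department1']

// innermost closure: invokes the criterion method on the builder
def inner = { -> delegate."$criterionMethod"(args) }

// first iteration ('department'): a curried clone of invoker wrapping the inner closure
def departmentLevel = { -> delegate.'department'(inner) }

// second iteration ('manager'): wraps the previous level; this is the closure that's returned
def managerLevel = { -> delegate.'manager'(departmentLevel) }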

So, finally we can run the decomposed query as one or more ‘core’ criteria closures with standard criterion method calls, plus zero or more derived projection closures:

def criteria = {
   eq 'name', 'employee1'
}
def projection = buildProjection('manager.department',
                                 'eq', ['name', 'department1'])

runCriteria Employee, [criteria, projection], [max: 10, offset: 0]

I doubt there’s a lot of reuse potential here to be honest, but working through this helped me to better understand how GORM runs criteria queries. I’ll be talking about this and some other GORM topics at Greach next month, so if you find this interesting be sure to check out the recording of that talk.

Updated Grails Database Migration plugin

Friday, January 04th, 2013

Edit: January 5 – I released the plugin that adds support for JAXB-based classes; see the plugin page and the documentation for more information.


One of the downsides to releasing a lot of plugins is lots of reported issues. I’ve joked that since there aren’t good ways to know how much use a plugin gets, the best metric is the number of reported bugs and feature requests, and that is mostly true. Using that logic the database-migration plugin is very popular 🙂

I try to address serious issues, but most of this plugin’s issues have to do with generated code. My attitude towards generated code is that it should not be trusted, and should rarely be expected to be completely correct. For example, when you use the dbm-gorm-diff or dbm-generate-gorm-changelog scripts, they do most of your work for you. My hope is that they save you a lot of time and that you shouldn’t need to do much work to fix any problems, but you should expect some.

When I did the What’s new with Grails 2.0 talk at NEJUG a year ago I mentioned this plugin and focused on the GORM-based scripts because I think they’re the best approach to creating migrations. But one of the attendees who also uses Rails said that Rails migrations were better because they have a DSL that you can use to write the migrations. I realized that I was so used to running dbm-gorm-diff that I had neglected to even mention the extensive Groovy DSL that the plugin supports (it’s a 100% clone of the XML syntax in native Liquibase). It’s a good DSL and you can create migrations completely by hand using it, but I can’t see why you would do that given how much you can get for free with the scripts. I mention this story to point out why I think it’s ironic when people complain that it’s tedious to have to fix invalid code that a script generated; feel free to use the DSL directly and forego the broken scripts 😉
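
For reference, a hand-written migration in the plugin's Groovy DSL looks roughly like this (the table and column names here are just illustrative):

databaseChangeLog = {
   changeSet(author: 'burt', id: 'create-person-1') {
      createTable(tableName: 'person') {
         column(name: 'id', type: 'bigint', autoIncrement: true) {
            constraints(primaryKey: true, nullable: false)
         }
         column(name: 'name', type: 'varchar(255)') {
            constraints(nullable: false)
         }
      }
   }
}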


The bug list for the database-migration plugin was getting a bit big and there were quite a few open pull requests. The tipping point however was seeing this tweet and realizing that I should spend some time on the plugin again.

The pull request that Zan mentioned in his tweet was a big one, adding support for doing migrations on multiple databases, mirroring the multi-datasource support in Grails 2.0. It would be great if all pull requests were this high-quality, including documentation updates and lots of tests. While I was integrating that (I had made some changes since then that required a manual merge since the GitHub UI wouldn’t do an automatic merge, and there were a few conflicts) I worked on the other outstanding issues.

I merged in all of the open pull requests – many thanks for those. I also closed a few bugs that weren’t real bugs or were duplicates, and fixed several others. That made for an interesting JIRA 30-day issue graph:

Many of the other reported issues were variants of the same problem where Liquibase was specifying the size of database columns that don’t support a size (for example bytea(255)). Hibernate does a much better job of this, so I was able to rework things so the Hibernate data types are used where possible instead of what Liquibase generates. So hopefully the generated changelogs will be much more accurate and involve less tweaking.

You can see the release notes of the 1.3 release here and the updated docs here.

Note that the latest version of the plugin is 1.3.1 since there were issues with the JAXB code that I included in the 1.3 release. I removed the code since it depends on Java 7 (and wasn’t completely finished) and will release it as a separate plugin.

The Grails app-info-hibernate plugin

Wednesday, November 28th, 2012

The original app-info plugin had support for displaying lots of information about your Grails application, and several pages for Hibernate information and graphs. The Hibernate features ended up being about half of the plugin, so originally I wanted to split out the Hibernate features into a separate plugin. This didn’t work because I wasn’t able to get the GSPs rendered; at the time it wasn’t possible to use a plugin attribute for the render method to tell Grails where to find the controller mixin’s GSPs.

When Grails 2.0 was released my hand was forced though, since there wasn’t a version of the Hibernate Tools library (which I use to generate table and entity graphs) that worked with the updated version of Hibernate that Grails now uses. I was able to create a mostly-working version of the db-reverse-engineer plugin, which also uses Hibernate Tools, by forking the Gant script in its own JVM and using a different Hibernate jar, but that wasn’t possible in the app-info plugin because the functionality is part of the runtime, not just a script. So I removed the Hibernate features with plans to create an app-info-hibernate plugin once there was a compatible Hibernate Tools jar; I wrote about this here.

Fortunately there is finally a “CR1” version of the Hibernate Tools library in Maven Central, and in my testing I discovered that the plugin attribute does work in the Grails 2.0 render method, so I finished up the work for the plugin and released it today. I also released an update of the db-reverse-engineer plugin which uses the updated library and no longer needs the hackish workaround of forking a new process; install version 0.5 by adding compile ':db-reverse-engineer:0.5' to your BuildConfig.groovy.


Using the plugin is very similar to what I described in the original blog post. Add the plugin to BuildConfig.groovy:

plugins {
   ...

   compile ':app-info-hibernate:0.2'
}

(note that there’s no need to add the app-info plugin since it will be installed transitively) and configure the grails.plugins.dynamicController.mixins map in Config.groovy:

grails.plugins.dynamicController.mixins = [
   'com.burtbeckwith.grails.plugins.appinfo.IndexControllerMixin':
      'com.burtbeckwith.appinfo_test.AdminManageController',

   'com.burtbeckwith.grails.plugins.appinfo.Log4jControllerMixin' :
      'com.burtbeckwith.appinfo_test.AdminManageController',

   'com.burtbeckwith.grails.plugins.appinfo.SpringControllerMixin' :
      'com.burtbeckwith.appinfo_test.AdminManageController',

   'com.burtbeckwith.grails.plugins.appinfo.MemoryControllerMixin' :
      'com.burtbeckwith.appinfo_test.AdminManageController',

   'com.burtbeckwith.grails.plugins.appinfo.PropertiesControllerMixin' :
      'com.burtbeckwith.appinfo_test.AdminManageController',

   'com.burtbeckwith.grails.plugins.appinfo.ScopesControllerMixin' :
      'com.burtbeckwith.appinfo_test.AdminManageController',

   'com.burtbeckwith.grails.plugins.appinfo.ThreadsControllerMixin' :
      'com.burtbeckwith.appinfo_test.AdminManageController',

   'com.burtbeckwith.grails.plugins.appinfo.hibernate.HibernateControllerMixin' :
      'com.burtbeckwith.appinfo_test.AdminManageController',

   'app.info.custom.example.MyConfigControllerMixin' :
      'com.burtbeckwith.appinfo_test.AdminManageController'
]

One thing to be aware of is that the HibernateControllerMixin package has changed; it’s now in the com.burtbeckwith.grails.plugins.appinfo.hibernate package.

Note that due to some issues in the updated grails.org site, the app-info plugin page isn’t editable, so it’s out of date, and there’s no plugin page yet for the app-info-hibernate plugin. It will be at http://grails.org/plugin/app-info-hibernate when the issues are resolved. You can install the plugin; it’s just not viewable in the plugin portal.


You can download a sample application that uses the plugin here.

Grails SQL Logging part 2 – groovy.sql.Sql

Wednesday, October 31st, 2012

I discussed options for logging Hibernate-generated SQL in an earlier post, but today I was trying to figure out how to see the SQL from groovy.sql.Sql and didn’t have much luck at first. The core problem is that the Sql class uses a java.util.logging.Logger (JUL) while the rest of the world uses a Log4j logger (often with a Commons Logging or SLF4J wrapper). I assumed that since I am using the Grails support for JUL -> Log4j bridging (enabled with the grails.logging.jul.usebridge = true setting in Config.groovy), all I needed to do was add the class name to my log4j DSL block:

log4j = {
   error 'org.codehaus.groovy.grails',
         'org.springframework',
         'org.hibernate',
         'net.sf.ehcache.hibernate'
   debug 'groovy.sql.Sql'
}

but that didn’t work. Some googling led to this mailing list discussion which has a solution involving a custom java.util.logging.Handler to pipe JUL log messages for the 'groovy.sql.Sql' logger to Log4j. That seemed like overkill to me since theoretically that’s exactly what grails.logging.jul.usebridge = true already does. I realized I had no idea how the bridging worked, so I started looking at the implementation of this feature.

It turns out that this is handled by the Grails “logging” plugin (org.codehaus.groovy.grails.plugins.log4j.LoggingGrailsPlugin) which calls org.slf4j.bridge.SLF4JBridgeHandler.install(). This essentially registers a listener that receives every JUL log message and pipes it to the corresponding SLF4J logger (typically wrapping a Log4j logger) with a sensible mapping of the different log levels (e.g. FINEST -> TRACE, FINER -> DEBUG, etc.)

So what’s the problem then? While grails.logging.jul.usebridge = true does configure message routing, it doesn’t apply level settings from the log4j block to the corresponding JUL loggers. So although I set the level of 'groovy.sql.Sql' to debug, the JUL logger is still at its default level (INFO). The fix is to programmatically set that logger’s level to FINE (which the bridge logs at DEBUG; use FINEST to see everything) once, e.g. in BootStrap.groovy:

import groovy.sql.Sql
import java.util.logging.Level

class BootStrap {

   def init = { servletContext ->
      Sql.LOG.level = Level.FINE
   }
}

Autodiscovery of JPA-annotated domain classes in Grails

Wednesday, October 24th, 2012

There are some issues to be fixed with the support for adding JPA annotations (for example @Entity) to Groovy classes in grails-app/domain in 2.0. This is due to the switch to adding most GORM methods to the domain class bytecode with AST transformations instead of adding them to the metaclass at runtime with metaprogramming. There is a workaround – put the classes in src/groovy (or write them in Java and put them in src/java).

This adds a maintenance headache though: classes in grails-app/domain are discovered automatically, but there’s no scanning of src/groovy or src/java for annotated classes, so they must be explicitly listed in grails-app/conf/hibernate/hibernate.cfg.xml. We do support something similar for Spring beans: you can annotate Groovy and Java classes with Spring bean annotations like @Component, and the optional grails.spring.bean.packages property in Config.groovy can contain one or more package names to search. We configure a Spring scanner that looks for annotated classes and automatically registers them as beans. That’s what we need for JPA-annotated src/groovy and src/java classes.
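
For comparison, the Spring bean scanning support is enabled with a one-line setting in Config.groovy (the package name here is illustrative):

grails.spring.bean.packages = ['com.mycompany.myapp.beans']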

It turns out that there is a Spring class that does this, org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean. It extends the standard SessionFactory factory bean class org.springframework.orm.hibernate3.LocalSessionFactoryBean and adds support for an explicit list of class names to use and also a list of packages to scan. Unfortunately the Grails factory bean class org.codehaus.groovy.grails.orm.hibernate.ConfigurableLocalSessionFactoryBean also extends LocalSessionFactoryBean so if you configure your application to use AnnotationSessionFactoryBean you’ll lose a lot of important functionality from ConfigurableLocalSessionFactoryBean. So here’s a subclass of ConfigurableLocalSessionFactoryBean that borrows the useful annotation support from AnnotationSessionFactoryBean and can be used in a Grails application:

package com.burtbeckwith.grails.jpa;

import java.io.IOException;

import javax.persistence.Embeddable;
import javax.persistence.Entity;
import javax.persistence.MappedSuperclass;

import org.codehaus.groovy.grails.orm.hibernate.ConfigurableLocalSessionFactoryBean;
import org.codehaus.groovy.grails.orm.hibernate.cfg.GrailsAnnotationConfiguration;
import org.hibernate.HibernateException;
import org.hibernate.MappingException;
import org.hibernate.cfg.Configuration;
import org.springframework.context.ResourceLoaderAware;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.core.io.support.ResourcePatternResolver;
import org.springframework.core.io.support.ResourcePatternUtils;
import org.springframework.core.type.classreading.CachingMetadataReaderFactory;
import org.springframework.core.type.classreading.MetadataReader;
import org.springframework.core.type.classreading.MetadataReaderFactory;
import org.springframework.core.type.filter.AnnotationTypeFilter;
import org.springframework.core.type.filter.TypeFilter;
import org.springframework.util.ClassUtils;

/**
 * Based on org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean.
 * @author Burt Beckwith
 */
public class AnnotationConfigurableLocalSessionFactoryBean extends ConfigurableLocalSessionFactoryBean implements ResourceLoaderAware {

   private static final String RESOURCE_PATTERN = "/**/*.class";

   private Class<?>[] annotatedClasses;
   private String[] annotatedPackages;
   private String[] packagesToScan;

   private TypeFilter[] entityTypeFilters = new TypeFilter[] {
         new AnnotationTypeFilter(Entity.class, false),
         new AnnotationTypeFilter(Embeddable.class, false),
         new AnnotationTypeFilter(MappedSuperclass.class, false),
         new AnnotationTypeFilter(org.hibernate.annotations.Entity.class, false)};

   private ResourcePatternResolver resourcePatternResolver = new PathMatchingResourcePatternResolver();

   public AnnotationConfigurableLocalSessionFactoryBean() {
      setConfigurationClass(GrailsAnnotationConfiguration.class);
   }

   public void setAnnotatedClasses(Class<?>[] annotatedClasses) {
      this.annotatedClasses = annotatedClasses;
   }

   public void setAnnotatedPackages(String[] annotatedPackages) {
      this.annotatedPackages = annotatedPackages;
   }

   public void setPackagesToScan(String[] packagesToScan) {
      this.packagesToScan = packagesToScan;
   }

   public void setEntityTypeFilters(TypeFilter[] entityTypeFilters) {
      this.entityTypeFilters = entityTypeFilters;
   }

   public void setResourceLoader(ResourceLoader resourceLoader) {
      this.resourcePatternResolver = ResourcePatternUtils.getResourcePatternResolver(resourceLoader);
   }

   @Override
   protected void postProcessMappings(Configuration config) throws HibernateException {
      GrailsAnnotationConfiguration annConfig = (GrailsAnnotationConfiguration)config;
      if (annotatedClasses != null) {
         for (Class<?> annotatedClass : annotatedClasses) {
            annConfig.addAnnotatedClass(annotatedClass);
         }
      }
      if (annotatedPackages != null) {
         for (String annotatedPackage : annotatedPackages) {
            annConfig.addPackage(annotatedPackage);
         }
      }
      scanPackages(annConfig);
   }

   protected void scanPackages(GrailsAnnotationConfiguration config) {
      if (packagesToScan == null) {
         return;
      }

      try {
         for (String pkg : packagesToScan) {
            logger.debug("Scanning package '" + pkg + "'");
            String pattern = ResourcePatternResolver.CLASSPATH_ALL_URL_PREFIX +
                  ClassUtils.convertClassNameToResourcePath(pkg) + RESOURCE_PATTERN;
            Resource[] resources = resourcePatternResolver.getResources(pattern);
            MetadataReaderFactory readerFactory = new CachingMetadataReaderFactory(resourcePatternResolver);
            for (Resource resource : resources) {
               if (resource.isReadable()) {
                  MetadataReader reader = readerFactory.getMetadataReader(resource);
                  String className = reader.getClassMetadata().getClassName();
                  if (matchesFilter(reader, readerFactory)) {
                     config.addAnnotatedClass(resourcePatternResolver.getClassLoader().loadClass(className));
                     logger.debug("Adding annotated class '" + className + "'");
                  }
               }
            }
         }
      }
      catch (IOException ex) {
         throw new MappingException("Failed to scan classpath for unlisted classes", ex);
      }
      catch (ClassNotFoundException ex) {
         throw new MappingException("Failed to load annotated classes from classpath", ex);
      }
   }

   private boolean matchesFilter(MetadataReader reader, MetadataReaderFactory readerFactory) throws IOException {
      if (entityTypeFilters != null) {
         for (TypeFilter filter : entityTypeFilters) {
            if (filter.match(reader, readerFactory)) {
               return true;
            }
         }
      }
      return false;
   }
}

You can replace the Grails SessionFactory bean in your application’s grails-app/conf/spring/resources.groovy by using the same name as the one Grails registers:

import com.burtbeckwith.grails.jpa.AnnotationConfigurableLocalSessionFactoryBean

beans = {
   sessionFactory(AnnotationConfigurableLocalSessionFactoryBean) { bean ->
      bean.parent = 'abstractSessionFactoryBeanConfig'
      packagesToScan = ['com.mycompany.myapp.entity']
   }
}

Here I’ve listed one package name in the packagesToScan property but you can list as many as you want. You can also explicitly list classes with the annotatedClasses property. Note that this is for the “default” DataSource; if you’re using multiple datasources you will need to do this for each one.

So this means we can define this class in src/groovy/com/mycompany/myapp/entity/Person.groovy:

package com.mycompany.myapp.entity

import javax.persistence.Column
import javax.persistence.Entity
import javax.persistence.GeneratedValue
import javax.persistence.Id
import javax.persistence.Version

@Entity
class Person {

   @Id @GeneratedValue
   Long id

   @Version
   @Column(nullable=false)
   Long version

   @Column(name='first', nullable=false)
   String firstName

   @Column(name='last', nullable=false)
   String lastName

   @Column(nullable=true)
   String initial

   @Column(nullable=false, unique=true, length=200)
   String email
}

It will be detected as a domain class and if you run the schema-export script the table DDL will be there in target/ddl.sql.


There are a few issues to be aware of, however, mostly around constraints. You can’t define a constraints or mapping block in the class – they will be ignored. The mappings that you would have added just need to go in the annotations; for example I have overridden the default column names for the firstName and lastName properties in the example above. But nullable=true is the default for JPA and it’s the opposite in Grails – properties are required by default. So while the annotations will affect the database schema, Grails doesn’t use the constraints from the annotations and you will get a validation error for this class if you fail to provide a value for the initial property.

You can address this by creating a constraints file in src/java; see the docs for more details. So in this case I would create src/java/com/mycompany/myapp/entity/PersonConstraints.groovy with a non-static constraints property, e.g.

constraints = {
   initial(nullable: true)
   email(unique: true, maxSize: 200)
}

This way the Grails constraints and the database constraints are in sync; without this I would be able to create an instance of the domain class that has an email with more than 200 characters and it would validate, but cause a database constraint exception when inserting the row.

This also has the benefit of letting you use the Grails constraints that don’t correspond to a JPA constraint such as email and blank.

Logging Hibernate SQL

Thursday, October 18th, 2012

There are two well-known ways to log Hibernate SQL in Grails; one is to add logSql = true in DataSource.groovy (either in the top-level block for all environments or per-environment)

dataSource {
   dbCreate = ...
   url = ...
   ...
   logSql = true
}

and the other is to use a Log4j logging configuration:

log4j = {
   ...
   debug 'org.hibernate.SQL'
}

The problem with logSql is that it’s too simple – it just dumps the SQL to stdout and there is no option to see the values that are being set for the positional ? parameters. The logging approach is far more configurable: you can log to the console if you want, but you can also configure logging to a file, to a separate file just for these messages, or to any destination of your choice by using an Appender.
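
For example, here's a sketch of routing just the SQL statements to their own log file with the Log4j DSL (the appender name and file path are illustrative):

log4j = {
   appenders {
      // a dedicated file just for the SQL statements
      file name: 'sqlLog', file: 'sql.log'
   }
   debug sqlLog: 'org.hibernate.SQL'
}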

But the logging approach is problematic too – by enabling a second Log4j category

log4j = {
   ...
   debug 'org.hibernate.SQL'
   trace 'org.hibernate.type'
}

we can see variable values, but we see them both for PreparedStatement sets and for ResultSet gets, and the gets can result in massive log files full of useless statements. This works because the “Type” classes that Hibernate uses to store and load Java values to and from database columns (for example LongType, StringType, etc.) are in the org.hibernate.type package and extend (indirectly) org.hibernate.type.NullableType, which does the logging in its nullSafeSet and nullSafeGet methods.

So if you have a GORM domain class

class Person {
   String name
}

and you save an instance

new Person(name: 'me').save()

you’ll see output like this:

DEBUG hibernate.SQL  - insert into person (id, version, name) values (null, ?, ?)
TRACE type.LongType  - binding '0' to parameter: 1
TRACE type.StringType  - binding 'me' to parameter: 2
DEBUG hibernate.SQL  - call identity()

When you later run a query to get one or more instances

def allPeople = Person.list()

you’ll see output like this

DEBUG hibernate.SQL  - select this_.id as id0_0_, this_.version as version0_0_, this_.name as name0_0_ from person this_
TRACE type.LongType  - returning '1' as column: id0_0_
TRACE type.LongType  - returning '0' as column: version0_0_
TRACE type.StringType  - returning 'me' as column: name0_0_

This isn’t bad for one instance but if there were multiple results then you’d have a block for each result containing a line for each column.

I was talking about this yesterday at my Hibernate talk at SpringOne 2GX and realized that it should be possible to create a custom Appender that inspects log statements for these classes and ignores the statements resulting from ResultSet gets. To my surprise it turns out that everything has changed in Grails 2.x because we upgraded from Hibernate 3.3 to 3.6 and this problem has already been addressed in Hibernate.

The output above is actually from a 1.3.9 project that I created after I got unexpected output in a 2.1.1 application. Here’s what I saw in 2.1.1:

DEBUG hibernate.SQL  - 
    /* insert Person
        */ insert 
        into
            person
            (id, version, name) 
        values
            (null, ?, ?)

TRACE sql.BasicBinder  - binding parameter [1] as [BIGINT] - 0

TRACE sql.BasicBinder  - binding parameter [2] as [VARCHAR] - asd

and

DEBUG hibernate.SQL  -
    /* load Author */ select
        author0_.id as id1_0_,
        author0_.version as version1_0_,
        author0_.name as name1_0_
    from
        author author0_
    where
        author0_.id=?

TRACE sql.BasicBinder  - binding parameter [1] as [BIGINT] - 1

TRACE sql.BasicExtractor  - found [0] as column [version1_0_]

TRACE sql.BasicExtractor  - found [asd] as column [name1_0_]

So now instead of doing all of the logging from the types’ base class, it’s been reworked to delegate to org.hibernate.type.descriptor.sql.BasicBinder and org.hibernate.type.descriptor.sql.BasicExtractor. This is great because now we can change the Log4j configuration to

log4j = {
   ...
   debug 'org.hibernate.SQL'
   trace 'org.hibernate.type.descriptor.sql.BasicBinder'
}

and have our cake and eat it too; the SQL is logged to a configurable Log4j destination and only the PreparedStatement sets are logged.

Note that the SQL looks different in the second set of examples not because of a change in Grails or Hibernate, but because I always enable SQL formatting (with format_sql) and comments (with use_sql_comments) in test apps so the output is more readable when I do enable logging; I forgot to do that for the 1.3 app:

hibernate {
   cache.use_second_level_cache = true
   cache.use_query_cache = false
   cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
   format_sql = true
   use_sql_comments = true
}

Updates for “Delayed SessionFactory Creation in Grails”

Wednesday, September 26th, 2012

Back in the beginning of 2010 I did a post on how to delay creating the SessionFactory based on a discussion on the User mailing list. This has come up again and I thought I’d look and see if things had changed for Grails 2.

The general problem is the same as it was: Grails and Hibernate create database connections during startup to help with configuration, so to avoid that, the information that would be auto-discovered has to be specified explicitly. In addition, any eager initialization that can wait should wait.

One such configuration item is the lobHandlerDetector bean. This hasn’t changed from before, so the approach involves specifying the bean yourself (and it’s different depending on whether you’re using Oracle or another database). Since it’s the same I won’t include the details here; see the previous post.

Another is the Dialect. Again, this is the same as before – just specify it in DataSource.groovy. This is a good idea in general since there might be particular features you need in a non-default Dialect class, and specifying org.hibernate.dialect.MySQL5InnoDBDialect for MySQL guarantees that you’ll be using transactional InnoDB tables instead of non-transactional MyISAM tables.
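
As a quick reminder, that's a one-line addition to the dataSource block (shown here for MySQL; use the Dialect that matches your database):

dataSource {
   ...
   dialect = org.hibernate.dialect.MySQL5InnoDBDialect
}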

The remaining issues have to do with eager initialization. I started down the path of reworking how to lazily initialize the SessionFactory, since using a Spring bean post-processor is significantly less involved (and less brittle) than the approach I had previously used. But it turns out that the more recent version of Hibernate that we’re now using supports a flag that avoids database access during SessionFactory initialization, hibernate.temp.use_jdbc_metadata_defaults. So add this to the hibernate block in DataSource.groovy:

hibernate {
   ...
   temp.use_jdbc_metadata_defaults = false
}

And the last issue is the DataSource itself. Up to this point all of the changes will avoid getting a connection, but the pool might pre-create connections at startup. The default implementation in Grails is org.apache.commons.dbcp.BasicDataSource and its initial size is 0, so you’re ok if you haven’t configured a different implementation. If you have, be sure to set its initial size to 0 (this isn’t part of the DataSource interface, so the setter method is implementation-specific, if it even exists).
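
If you have configured a different pool, something like this sketch keeps it from connecting at startup (the properties block is passed through to the pool implementation; initialSize is the property name DBCP’s BasicDataSource uses, so adjust it for your pool):

dataSource {
   ...
   properties {
      initialSize = 0
   }
}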


If you’re using multiple datasources, you can delay their database connectivity too. There is a lobHandlerDetector bean for each datasource, so for example if you have a second one with the name “ds2”, configure a lobHandlerDetector_ds2 bean in resources.groovy. Likewise for the Dialect; specify it in the dataSource_ds2 block in DataSource.groovy. Set the use_jdbc_metadata_defaults option in the hibernate_ds2 block:

hibernate_ds2 {
   ...
   temp.use_jdbc_metadata_defaults = false
}
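
and, as mentioned above, the Dialect for the second datasource goes in its own block, for example:

dataSource_ds2 {
   ...
   dialect = org.hibernate.dialect.MySQL5InnoDBDialect
}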

And finally, as for the single-datasource case, if you’ve reconfigured secondary datasource beans’ pool implementations, set their initial sizes to 0.

Hibernate Bags in Grails 2.0

Monday, November 14th, 2011

When I’ve talked in the past about collection mapping in Grails (you can see a video of a SpringOne/2GX talk here) I mentioned that the current approach of using Sets or Lists is problematic and provided workarounds. I mentioned at the time that Hibernate has support for Bags which don’t enforce uniqueness or order like Sets and Lists do, so if GORM supported Bags we could just use those. So I added support for Bags to GORM for Grails 2.0 and thought that was that.

I thought it’d be interesting to demo this at my GORM talk at this year’s SpringOne/2GX but when I created a small test application it wasn’t working like I remembered. In fact it was actually worse than the problems I was working around. So I put that away with a mental note to get back to this soon, and before 2.0 final is released.

It turns out there’s good news and bad news. The good news is that it’s not completely broken. The bad news is that it’s mostly broken.


First the good news. If you have a one-to-many that doesn’t use a join table, using a Bag works mostly as expected. As an example, consider an Author/Book mapping where a book has one author, and an author can have many books:

class Author {
   String name
   Collection books
   static hasMany = [books: Book]
}
class Book {
   String title
   static belongsTo = [author: Author]
}

Using the Map syntax for the belongsTo mapping is the key to avoiding the join table and relating the tables with a foreign key from the book table to the author table. If you run grails schema-export the output will be something like

create table author (
   id bigint generated by default as identity,
   version bigint not null,
   name varchar(255) not null,
   primary key (id)
);

create table book (
   id bigint generated by default as identity,
   version bigint not null,
   author_id bigint not null,
   title varchar(255) not null,
   primary key (id)
);

alter table book add constraint FK2E3AE9CD85EDFA
foreign key (author_id) references author;

If you run this initializing code in a Grails console with SQL logging enabled (add logSql = true in DataSource.groovy)

def author = new Author(name: 'Hunter S. Thompson')
author.addToBooks(title: 'Fear and Loathing in Las Vegas')
author.save()

you’ll see output like this:

insert into author (id, version, name) values (null, ?, ?)

insert into book (id, version, author_id, title) values (null, ?, ?, ?)

update author set version=?, name=? where id=? and version=?

which is ok; it inserts the author and the book, although it bumps the version of the Author. I’ll come back to that.

If you run this updating code:

def author = Author.get(1)
author.addToBooks(title: "Hell's Angels: A Strange and Terrible Saga")
author.save()

you’ll see output like this:

select author0_.id as id0_0_, author0_.version as version0_0_,
author0_.name as name0_0_ from author author0_ where author0_.id=?

insert into book (id, version, author_id, title) values (null, ?, ?, ?)

update author set version=?, name=? where id=? and version=?

This is also basically ok – it loads the author, inserts the book, and versions the author.

If you map the belongsTo with the non-map syntax (static belongsTo = Author) you’ll get this DDL:

create table author (
   id bigint generated by default as identity,
   version bigint not null,
   name varchar(255) not null,
   primary key (id)
);

create table author_book (
   author_books_id bigint,
   book_id bigint
);

create table book (
   id bigint generated by default as identity,
   version bigint not null,
   title varchar(255) not null,
   primary key (id)
);

alter table author_book add constraint FK2A7A111D3FA913A
foreign key (book_id) references book;

alter table author_book add constraint FK2A7A111DC46A00AF
foreign key (author_books_id) references author;

and running the initializing code above will result in output that’s similar to before, with the addition of inserting into the join table:

insert into author (id, version, name) values (null, ?, ?)

insert into book (id, version, title) values (null, ?, ?)

update author set version=?, name=? where id=? and version=?

insert into author_book (author_books_id, book_id) values (?, ?)

but running the updating code results in this:

select author0_.id as id4_0_, author0_.version as version4_0_,
author0_.name as name4_0_ from author author0_ where author0_.id=?

select books0_.author_books_id as author1_4_0_, books0_.book_id as
book2_0_ from author_book books0_ where books0_.author_books_id=?

select book0_.id as id3_0_, book0_.version as version3_0_,
book0_.title as title3_0_ from book book0_ where book0_.id=?

insert into book (id, version, title) values (null, ?, ?)

update author set version=?, name=? where id=? and version=?

delete from author_book where author_books_id=?

insert into author_book (author_books_id, book_id) values (?, ?)

insert into author_book (author_books_id, book_id) values (?, ?)

This is not good. It reads the author, then all of the books for that author (the part we’re trying to avoid), inserts the book, and then deletes every row from the join table for this author, and re-inserts rows for each element in the Bag. Ouch.


If you convert the relationship to a many-to-many with Bags on both sides:

class Author {
   String name
   Collection books
   static hasMany = [books: Book]
}
class Book {
   String title
   Collection authors
   static hasMany = [authors: Author]
   static belongsTo = Author
}

and run this initializing code:

def author = new Author(name: 'Hunter S. Thompson')
author.addToBooks(title: 'Fear and Loathing in Las Vegas')
author.save()

you get this output:

insert into author (id, version, name) values (null, ?, ?)

insert into book (id, version, title) values (null, ?, ?)

update author set version=?, name=? where id=? and version=?

update book set version=?, title=? where id=? and version=?

insert into author_books (author_id, book_id) values (?, ?)

It inserts the author and the book, then versions both rows, and inserts a row into the join table.

If you run this updating code:

def author = Author.get(1)
author.addToBooks(title: "Hell's Angels: A Strange and Terrible Saga")
author.save()

then the output is similar to the output for one-to-many with a join table:

select author0_.id as id0_0_, author0_.version as version0_0_,
author0_.name as name0_0_ from author author0_ where author0_.id=?

select books0_.author_id as author1_0_0_, books0_.book_id as book2_0_
from author_books books0_ where books0_.author_id=?

insert into book (id, version, title) values (null, ?, ?)

update author set version=?, name=? where id=? and version=?

update book set version=?, title=? where id=? and version=?

delete from author_books where author_id=?

insert into author_books (author_id, book_id) values (?, ?)

insert into author_books (author_id, book_id) values (?, ?)

It loads the author, then all of the book ids from the join table (to create proxies, which are lighter-weight than full domain class instances but there will still be N of them in memory), then inserts the new book, versions both rows, and again deletes every row from the join table and reinserts them. Ouch again.


So for the two cases where there are join tables, we have a problem. Hibernate doesn’t worry about duplicates or order in-memory, but the join tables can’t have duplicate records, so it has to pessimistically clear the data and reinsert it. This has all of the negatives of the non-Bag approach and adds another big one.

Even in the first case I described where there’s no join table, there’s still a problem. Since the Author’s version gets incremented when you add a Book (you’re editing a property of the Author, so it’s considered to be updated even though it’s a collection pointing to another table) there’s a high risk that concurrently adding child instances will cause optimistic locking exceptions for the Author, even though you just want to insert rows into the book table. And this is the case for all three scenarios.


So I guess I’m back to advocating the approach from my earlier talks; don’t map a collection of Books in the Author class, but add an Author field to the Book class instead:

class Author {
   String name
}
class Book {
   String title
   Author author
}
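
With that mapping the “many” side is still easy to query when you need it, for example with a standard dynamic finder (paging and sorting parameters shown for completeness):

def author = Author.get(1)
def books = Book.findAllByAuthor(author, [max: 10, sort: 'title'])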

And for the many-to-many case, map the “author_books” join table with a domain class:

class Author {
   String name
}
class Book {
   String title
}
class AuthorBook {
   Author author
   Book book
   ...
}
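
Navigating the relationship then goes through the join class, for example with standard dynamic finders (a sketch):

// books for an author
def books = AuthorBook.findAllByAuthor(author)*.book

// authors for a book
def authors = AuthorBook.findAllByBook(book)*.author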

Customizing GORM with a Configuration Subclass

Tuesday, December 28th, 2010

GORM mappings let you configure pretty much anything you need in your Grails applications, but occasionally there are more obscure tweaks that aren’t directly supported, and in this case a custom Configuration class is often the solution.

By default Grails uses an instance of GrailsAnnotationConfiguration and the standard approach is to subclass that to retain its functionality and override the secondPassCompile() method.

As an example, let’s look at what is required to specify the foreign key name between two related domain classes. This is inspired by this mailing list question but is also a personal pet peeve since I always name foreign keys in traditional Hibernate apps (using annotations or hbm.xml files). FK_USER_COUNTRY is a lot more useful than FK183C3385A9B72.

One restriction is that you need to code the class in Java – a Groovy class won’t compile due to method visibility issues.

Create this class (with an appropriate name and package for your application) in src/java:

package com.yourcompany.yourapp;

import java.util.Collection;
import java.util.Iterator;

import org.codehaus.groovy.grails.orm.hibernate.cfg.GrailsAnnotationConfiguration;
import org.hibernate.MappingException;
import org.hibernate.mapping.ForeignKey;
import org.hibernate.mapping.PersistentClass;
import org.hibernate.mapping.RootClass;

public class MyConfiguration extends GrailsAnnotationConfiguration {

   private static final long serialVersionUID = 1;

   private boolean _alreadyProcessed;

   @SuppressWarnings({"unchecked", "rawtypes"})
   @Override
   protected void secondPassCompile() throws MappingException {
      super.secondPassCompile();

      if (_alreadyProcessed) {
         return;
      }

      for (PersistentClass pc : (Collection<PersistentClass>)classes.values()) {
         if (pc instanceof RootClass) {
            RootClass root = (RootClass)pc;
            if ("com.yourcompany.yourapp.User".equals(root.getClassName())) {
               for (Iterator iter = root.getTable().getForeignKeyIterator();
                       iter.hasNext();) {
                  ForeignKey fk = (ForeignKey)iter.next();
                  fk.setName("FK_USER_COUNTRY");
               }
            }
         }
      }

      _alreadyProcessed = true;
   }
}

This is a very simplistic example and everything is hard-coded. A real example would check that the foreign key exists, that it’s the correct one, etc., or might be more sophisticated and automatically rename all foreign keys using the FK_ prefix and using the table names of the two related tables.

This won’t be used automatically; you need to set the configClass property in grails-app/conf/DataSource.groovy:

dataSource {
   pooled = true
   driverClassName = '...'
   username = '...'
   password = '...'
   configClass = 'com.yourcompany.yourapp.MyConfiguration'
}

For other examples of using this approach, see these posts in the Nabble archive.

Grails Database Reverse Engineering Plugin

Tuesday, November 09th, 2010

Support for database migrations and reverse engineering are two related features that we’ve scheduled for Grails 1.4/2.0 (see the roadmap wiki page for the others). The migration support will be based on Liquibase and there’s already a plugin for that so I started looking at reverse engineering first.

Work progressed faster than I expected (thanks to the features of the Hibernate Tools library and all of the time I spent digging into its internals for the App Info plugin) and it didn’t depend on any new features in 1.4 (not yet anyway), so I released the plugin yesterday so users can start using it now. Install it the usual way:

grails install-plugin db-reverse-engineer

and refer to the documentation for configuration options.

I tested this with MySQL and Oracle, and other databases that Hibernate supports should work too. There’s a tutorial in the documentation that uses MySQL, and you can use the Chinook database to test with Oracle. I used these settings (in grails-app/conf/Config.groovy) for the Chinook database:

grails.plugin.reveng.packageName = 'com.codeplex.chinookdatabase'
grails.plugin.reveng.defaultSchema = 'CHINOOK'
grails.plugin.reveng.manyToManyBelongsTos = [PLAYLISTTRACK: 'PLAYLIST']

and these datasource settings (in grails-app/conf/DataSource.groovy)

dataSource {
   url = 'jdbc:oracle:thin:@localhost:1521:orcl'
   driverClassName = 'oracle.jdbc.driver.OracleDriver'
   username = 'chinook'
   password = 'p4ssw0rd'
   dialect = org.hibernate.dialect.Oracle10gDialect
}

Try it out and report any issues on the Grails user mailing list or in JIRA.


One related thing I wanted to point out is that the work to replace HSQLDB with H2 is mostly complete (JIRA issue here). I’m a big fan of H2 and one of its coolest features is its embedded web-based console (which works with any database that has a JDBC driver). This is now enabled by default in the development environment and can be enabled in other environments. Accessing data in your development database will be very convenient in 1.4 – just open http://localhost:8080/appname/dbconsole in a browser (JIRA issue here).

Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.