Wednesday, February 19, 2014

Getting a Spec to run with Jasmine in the browser

This simple issue vexed me for an hour, so I thought I'd save you all some time.

"I loaded the Jasmine SpecRunner.html file, but the report says 0 specs were run."  What gives?

Well, in my case, running with Chrome, it turned out to be simple....I had accidentally created my JavaScript 'script' tag as an empty (self-closing) tag, and that's all it was.  This was annoying in that no console error messages appeared; I just had to eyeball it.  (In HTML, 'script' is not a void element, so a self-closing script tag is treated as an ordinary open tag, and the markup that follows gets swallowed up as its content.)

So this bit

    <!-- include source files here... -->
  <script type="text/javascript" src="../../src/common/virtual-collections/virtualCollectionsService.js"></script>

  <!-- include spec files here... -->
  <script type="text/javascript" src="common/virtual-collections/virtualCollectionsServiceSpec.js"/>

Just had to become this bit...and voilà!

 <!-- include source files here... -->
  <script type="text/javascript" src="../../src/common/virtual-collections/virtualCollectionsService.js"></script>

  <!-- include spec files here... -->
  <script type="text/javascript" src="common/virtual-collections/virtualCollectionsServiceSpec.js"></script>

The small things always kill you.  Jasmine was right: there were no specs.

Thursday, September 12, 2013

Basic intro - getting and building Jargon

The latest in our little series of video chats is an overview of logging into GForge at RENCI, getting Jargon from git, and building it with Maven.

Monday, June 17, 2013

Jargon and RestEasy - some notes on what I've run into

I'm starting to work on a formal REST API for iRODS.  This is coming from multiple projects, but this first one gives me a chance to build the skeleton and set down some practices for later.  The project itself is here on the RENCI GForge.

For several reasons, I decided to roll with JBoss RestEasy, not least of which is its compliance with JAX-RS, which goes some way towards future-proofing any work I do.  There is also a need to do some S/MIME encryption of messages, and it looks like RestEasy handles this well enough.

RestEasy is not without its headaches and frustrations.  A good deal of this frustration has to do with integrating Spring beans into the mix, which I use a lot in Jargon.  The docs don't seem to reflect actual usage in this area, both for service development and for testing.  In the Spring Integration section of the RestEasy docs, you get this example:

   <display-name>Archetype Created Web Application</display-name>




For your web.xml, and:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Import basic SpringMVC Resteasy integration -->
    <import resource="classpath:springmvc-resteasy.xml"/>

</beans>

For the Spring configuration file.

It doesn't work; it doesn't load your beans...

I dug around a lot (and sorry, I cannot retrace my steps and refer you to some of the info I found!), and by combining several proposed solutions I found that this worked...

First, for the web.xml document:








The things to highlight here include the fact that I had to comment out the RestEasy component scan context parameter, add the contextConfigLocation parameter, and wire in the Spring RestEasy integration components by hand.  In this configuration, it does load my custom beans, and then it loads my RestEasy services by the fact that I added direct Spring configuration for that component scan:
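Pieced together, a web.xml along those lines might look like the sketch below. The servlet name, context file name, and URL pattern are placeholders, and the listener/servlet class names are from my reading of the RestEasy docs, so double-check them against your RestEasy version:

```xml
<web-app>
  <display-name>Example RestEasy/Spring web application</display-name>

  <!-- the RestEasy component scan is commented out; Spring drives scanning instead -->
  <!--
  <context-param>
    <param-name>resteasy.scan</param-name>
    <param-value>true</param-value>
  </context-param>
  -->

  <!-- point Spring at your own bean definitions -->
  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>classpath:rest-servlet.xml</param-value>
  </context-param>

  <!-- wire the RestEasy/Spring integration components in by hand -->
  <listener>
    <listener-class>org.jboss.resteasy.plugins.server.servlet.ResteasyBootstrap</listener-class>
  </listener>
  <listener>
    <listener-class>org.jboss.resteasy.plugins.spring.SpringContextLoaderListener</listener-class>
  </listener>

  <servlet>
    <servlet-name>Resteasy</servlet-name>
    <servlet-class>org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>Resteasy</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```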


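The 'direct Spring configuration' for the RestEasy component scan boils down to a context:component-scan element in the Spring file. A sketch, with a placeholder base package (substitute the package where your JAX-RS annotated services live):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- scan for the JAX-RS annotated service classes; base-package is a placeholder -->
    <context:component-scan base-package="com.example.rest.services"/>

</beans>
```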
OK, so that seems to be OK now. I have a service running, it's doing content negotiation, and it's wired into my Jargon libraries; how do I test it? Jargon has a lot of tests, and I don't need to re-test Jargon or iRODS here, so I decided to test at the HTTP request level. Given this, I was not too excited about testing with mocks. Mocks seem like a lot of trouble, and might mask some of the subtleties involved, given that it's pretty easy (I thought) to test all of this with an embedded servlet container. This seems ideal: test end-to-end as the user sees things, and create sample code at the same time. What could be better?

The JBoss docs don't go into great detail about best practices for testing RestEasy apps, but the TJWS embedded container seemed the obvious choice.  They provide a bit of pseudo-code in the docs, and, unfortunately, it does not work:

   public static void main(String[] args) throws Exception {
      final TJWSEmbeddedJaxrsServer tjws = new TJWSEmbeddedJaxrsServer();

      org.jboss.resteasy.plugins.server.servlet.SpringBeanProcessor processor = new SpringBeanProcessor(tjws.getDeployment().getRegistry(), tjws.getDeployment().getFactory());
      ConfigurableBeanFactory factory = new XmlBeanFactory(...);
   }

At least we see a bit that looks like it can be adapted to the setUp() method of a JUnit test case.  Given that clue, I found some very helpful posts, such as this one from 'eugene' (thanks eugene!).  But even this did not work, as the ApplicationContext was never @Autowired in; I kept getting NPEs.

This got me very close, and I've used SpringJUnit4ClassRunner extensively for Hibernate/JPA based applications in the iDrop suite, so I felt like I just needed to hack on that a bit and I could get there.  The missing piece came from 'Daff' (thanks Daff!), who pointed out in his post that your JUnit test case can implement ApplicationContextAware.

I tried to wire this into the @BeforeClass annotated startup() method with a static ApplicationContext variable.  Needless to say, that did not work, and was always 'null'.  It ended up that I had to place that server startup code in the @Before annotated method, which runs on instance variables, and the context was then available.  That's a little bit hinky, but given that I have a very short window for this project, I rolled with a solution there that saves the ApplicationContext in a static variable, and checks to see (singleton-like) if an instance has been created yet for the JUnit class.  This is working fine so far, and only smells a tiny bit.  I may revisit it, but I'm happy enough to get a base testing strategy defined.

So, here's a JUnit test that works with Spring configured beans:


import junit.framework.Assert;

import org.jboss.resteasy.client.ClientRequest;
import org.jboss.resteasy.client.ClientResponse;
import org.jboss.resteasy.core.Dispatcher;
import org.jboss.resteasy.plugins.server.tjws.TJWSEmbeddedJaxrsServer;
import org.jboss.resteasy.plugins.spring.SpringBeanProcessor;
import org.jboss.resteasy.plugins.spring.SpringResourceFactory;
import org.jboss.resteasy.spi.ResteasyDeployment;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:jargon-beans.xml",
  "classpath:rest-servlet.xml" })
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
  DirtiesContextTestExecutionListener.class })
public class UserServiceTest implements ApplicationContextAware {

 private static TJWSEmbeddedJaxrsServer server;

 private static ApplicationContext applicationContext;

 @BeforeClass
 public static void setUpBeforeClass() throws Exception {
 }

 @AfterClass
 public static void tearDownAfterClass() throws Exception {
  if (server != null) {
   server.stop();
  }
 }

 @Before
 public void setUp() throws Exception {
  // the server is started once, singleton-style, in @Before rather than
  // @BeforeClass, because the injected ApplicationContext is only
  // available on the instance lifecycle
  if (server != null) {
   return;
  }

  server = new TJWSEmbeddedJaxrsServer();
  server.setPort(8888); // port is illustrative
  ResteasyDeployment deployment = server.getDeployment();
  server.start();
  Dispatcher dispatcher = deployment.getDispatcher();
  SpringBeanProcessor processor = new SpringBeanProcessor(dispatcher,
    deployment.getRegistry(), deployment.getProviderFactory());
  ((ConfigurableApplicationContext) applicationContext)
    .addBeanFactoryPostProcessor(processor);

  SpringResourceFactory noDefaults = new SpringResourceFactory(
    "userService", applicationContext, UserService.class);
  dispatcher.getRegistry().addResourceFactory(noDefaults);
 }

 @After
 public void tearDown() throws Exception {
 }

 @Test
 public void testGetUserJSON() throws Exception {

  final ClientRequest clientCreateRequest = new ClientRequest(
    "http://localhost:8888/user/someUser"); // URL is illustrative

  final ClientResponse<String> clientCreateResponse = clientCreateRequest
    .accept("application/json").get(String.class);
  Assert.assertEquals(200, clientCreateResponse.getStatus());
  String entity = clientCreateResponse.getEntity();
  Assert.assertNotNull("null JSON entity", entity);
 }

 @Override
 public void setApplicationContext(final ApplicationContext context)
   throws BeansException {
  applicationContext = context;
 }
}

So it might not be 'ideal', but I can move on and get this thing done.  I'd appreciate any pointers or refinements, and hopefully this will at least get you running and save you similar headaches.

Thursday, April 25, 2013

Demo of SPARQL search in HIVE/iRODS Integration

This demo video was prepared for a presentation this week, and it shows a bit more of the integration between iRODS and HIVE.

As mentioned previously, we're integrating controlled vocabularies via SKOS using the HIVE system. Dr. Jane Greenberg at SILS has prepared a short paper describing some of the concepts and motivations for this effort here.

Technically, we have four primary elements:

  1. Integration of the HIVE system into our iDrop web interface.  This includes a new set of Jargon libraries that support this integration, allowing easy wiring of HIVE functionality via Spring.
  2. A 'visitor' and 'iterator' library in Jargon for sweeping through data objects and collections marked up with HIVE RDF terms.
  3. An OWL vocabulary (though it's a rough sketch right now) describing iRODS ICAT metadata and relationships, which also goes into the index with our vocabularies.
  4. A HIVE query REST interface that can issue SPARQL queries to our indexed triple store, and a start at some preset queries such as searching on a term, or searching for items related to a term.
These elements are demonstrated in the demo video below...

Saturday, March 23, 2013

SPARQL queries for iRODS Data

This is cool:

PREFIX irods: <>
PREFIX skos:  <>
SELECT ?x ?y
WHERE {
  ?x irods:correspondingConcept ?y .
  ?y skos:related <>
}

That's a SPARQL query running on Jena Fuseki, and it's related to the work we're doing with HIVE integration, as discussed in this previous blog entry.  SPARQL is a query language that can be used to search semantic metadata; in our case, metadata that describes the iRODS catalog, SKOS controlled vocabularies, and 'serialized' RDF statements saved as iRODS AVUs that apply controlled vocabulary terms to iRODS files and collections.  This improves the normal iRODS AVUs by giving them structure and meaning, via SKOS.

In the case above, we have a term defined in the Agrovoc vocabulary, which looks something like this snippet, rendered as Turtle.

      a       skos:Concept ;
      skos:narrower <> , <> , <> , <> , <> , <> , <> , <> , <> ;
      skos:prefLabel "Climatic zones"@en ;
      skos:related <> , <> , <> ;
      skos:scopeNote "Use for areas having identical climates; for the physical phenomenon use Climate (1665)"@en .

Note that SKOS will define broader, narrower, and related terms, along with other data.  This means that a user may tag an iRODS file or collection with a term like c_1669, and search for it on the related term c:6963.  

That's what the SPARQL query above shows: you are looking for any iRODS files or collections that have an AVU with a SKOS vocabulary term from Agrovoc that is related to a given concept.  The result of this query, in JSON, looks like so:

      { "x": { "type": "uri" , "value": "irods://localhost:1247/test1/trash/home/test1/jargon-scratch.1256888938/JenaHiveIndexerServiceImplWithDerbyTest/testExecuteOnt/subdirectory2/hivefile7" } ,
        "y": { "type": "uri" , "value": "" }
      } ,
      { "x": { "type": "uri" , "value": "irods://localhost:1247/test1/trash/home/test1/jargon-scratch.1256888938/JenaHiveIndexerServiceImplWithDerbyTest/testExecuteOnt/subdirectory1/hivefile7" } ,
        "y": { "type": "uri" , "value": "" }
      } ,
      { "x": { "type": "uri" , "value": "irods://localhost:1247/test1/trash/home/test1/jargon-scratch.705362199/JenaHiveIndexerServiceImplWithOntTest/testExecuteOnt/subdirectory1/hivefile6" } ,
        "y": { "type": "uri" , "value": "" }
      } ,

As you can see (or at least trust me on this), you are finding iRODS data based on a related concept.  With Fuseki, we could add such SPARQL queries in short order to the iDrop apps, or even to iCommands.  Note that we've done this by marking up iRODS data with SKOS terms, storing these as special AVUs, indexing them with a spider, and then putting them into a Jena triple store for SPARQL queries.  The same sorts of things can also be pretty easily done using Lucene for text search, and adding these new methods of finding data is going to be an interesting area for Jargon and iRODS development.  You can see some of the HIVE work in the GForge project at DICE and RENCI here!

Tuesday, March 12, 2013

Some work in progress integrating HIVE with iRODS

iRODS has a powerful facility, through the iCAT master catalog, to manage user-supplied metadata on different parts of the catalog domain, such as files and collections.  These are 'AVU' triples, which are just attribute-value-unit slots that can hold free-format data.
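As a purely hypothetical illustration (the attribute and unit names here are made up), a single AVU attached to a file might carry:

```
attribute : ClimaticZone
value     : c_1669
unit      : AgrovocConcept
```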

We're using AVUs by adding conventions and metaphors on top of them, such as free tags, starred folders, and shares, such as in this previous video demo.  One weakness of AVUs is that they are totally unstructured.  This does not mean that we cannot apply structure at a higher level, and that's exactly what the interest in HIVE integration is about.

HIVE is an acronym for Helping Interdisciplinary Vocabulary Engineering, and HIVE is a project from the Metadata Research Center at the School of Information and Library Science at UNC Chapel Hill.  (Did I mention we were just voted the #2 best program in the country by US News and World Report?)

HIVE is a tool that allows browsing and searching across controlled vocabularies defined in SKOS, a simple RDF schema for defining dictionaries, thesauri, and other structured metadata.  A key aspect is the integration of RDF with Lucene to allow searching across selected vocabularies, a helpful approach since much of the focus of iRODS and DICE is in multi-disciplinary research collaboration, as in the Datanet Federation Consortium.  HIVE solves a lot of problems we were facing, so it is a happy circumstance that the MRC is just around the corner from us, and we're busy looking at integration.

In a nutshell, HIVE allows us to:

  • Keep multiple controlled vocabularies
  • Allow users to easily search and navigate across vocabularies to find appropriate terms
  • Make AVU metadata meaningful by providing structure and consistency
  • Power rich metadata queries using tools such as SPARQL to find iRODS files and collections

A short video demo follows that shows the first level of integration between iDrop (the iRODS cloud browser) and HIVE.  We've added a HIVE tab to contain a concept browser, allowing markup of iRODS files and collections with controlled vocabulary terms.

Note that we've yet to add search across vocabularies and automatic keyword extraction with MAUI and KEA.  These are available in HIVE, and we intend to add them in this project.

The next step is to build the capability to extract iRODS data and vocabulary terms and populate a triple store (Sesame or Jena), allowing queries on the triple-store, and allowing processing of results such that users can access the referenced data in iRODS.  We're seeking a generalized approach so that we can have a standard practice to store RDF statements about iRODS data, and we can index and manage real-time updates.  This aspect is next for the project, and should have a wide application for iRODS users!

Thursday, February 7, 2013

Packing Instructions

Folks often ask what Jargon actually does, and I usually say it's like a JDBC driver underneath a high-level object library. The JDBC driver part refers to the fact that iRODS has a wire-level protocol that communicates commands and data between client and server (and this same protocol works server-to-server, it's a grid!). Anyhow, inside Jargon, there is an org.irods.jargon.core.packinstr package that models iRODS packing instructions.

This low-level protocol handling is meant to be 'under the covers' so you never need to worry about it.  This is especially important because the protocols can change, and we might see a future upgrade to something like protobuf.

At any rate, when developing Jargon implementations of the iRODS protocol, the actual procedure is to mine the C code, and hack at it until it works, via the creation of lots of unit tests.  Fancy...

In this endeavor, it's often helpful to see the actual protocol interactions of various icommands, and here's how you can do this too...

Simply open a shell and export these variables:

export irodsProt=1; export irodsLogLevel=9;
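The same thing broken out, with comments on what each variable does (my understanding: irodsProt=1 selects the XML rendering of the iRODS protocol, and irodsLogLevel=9 is the most verbose logging level):

```shell
# use the XML rendering of the iRODS protocol, so the packing
# instructions are human-readable on the console
export irodsProt=1

# crank client-side logging up to the most verbose level
export irodsLogLevel=9

echo "irodsProt=$irodsProt irodsLogLevel=$irodsLogLevel"
```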

Now as you execute your icommands, you'll be able to peek at the protocol operations going back and forth.  Be glad you don't really have to look at that, maybe you'll like Jargon now!