Discussions

News: Annotations and Validation

  1. Annotations and Validation (40 messages)

    Gavin King has written a blog entry ("Annotations and Validation") in which he says that validation is "the worst limitation of JSR-175," the annotations JSR. He notes that other limitations of annotations have been documented (annotation member values can't be null, and annotations don't use inheritance), but argues that the inability to validate annotations' proper usage is the biggest problem.
    The worst limitation of JSR-175, as it stands today, is the incredible paucity of facilities for validating annotated classes. Surprisingly, I have not seen this discussed elsewhere. The only facility provided for constraining annotations is the @Target meta-annotation, which specifies what kind of program element (class, method, field, etc) may be annotated. Compared with the functionality provided by DTDs or XML schemas, this is amazingly primitive. And, unlike the previous limitations we mentioned, this problem burdens the end user of an annotation-based framework rather than the framework designer.
    Gavin also mentions a problem with the @Resource annotation, which he says is too general:
    Actually, the lack of a proper constraint language in the current release of Java is already starting to lead some people down the wrong path. The Java EE 5 draft uses an annotation called @Resource for dependency injection of all kinds of diverse things, many of which are not "resources" in the usual sense of the word. Some "resources" require extra information such as authenticationType or mappedName, information which is not even meaningful for other types of "resources". So the @Resource annotation is turning into a bag of unrelated stuff, most of which is irrelevant to any given type of "resource". This is a construct with extremely weak semantics, and extremely low cohesion. It gets more complex, and less cohesive, each time we discover a new kind of "resource". It's the annotation equivalent of a class called Resource with methods like sendJmsMessage(), executeSqlQuery() and listInbox().
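    To make the cohesion problem concrete, here is a runnable, simplified sketch. The annotation below is a hypothetical stand-in (member names paraphrased from the post), not the real Java EE 5 @Resource declaration: a single annotation accumulates members that are only meaningful for some kinds of "resource".

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;

// Hypothetical, simplified stand-in for the draft's @Resource
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Resource {
    String name() default "";
    String mappedName() default "";          // only meaningful for some "resources"
    String authenticationType() default "";  // irrelevant for most others
}

public class ResourceDemo {
    // A connection factory might care about authenticationType...
    @Resource(name = "jdbc/MyDS", authenticationType = "CONTAINER")
    Object dataSource;

    // ...while a simple environment entry cares about neither extra member
    @Resource(name = "maxRetries")
    Object maxRetries;

    public static void main(String[] args) {
        for (Field f : ResourceDemo.class.getDeclaredFields()) {
            Resource r = f.getAnnotation(Resource.class);
            System.out.println(f.getName() + " -> authenticationType='"
                    + r.authenticationType() + "'");
        }
    }
}
```

    Reading the annotation back shows that authenticationType is simply noise for the second field, which is the low cohesion Gavin is pointing at.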
    Annotations are proving to be a popular facility for Java SE 5 users - not least because new APIs such as EJB 3 and Hibernate (and some web frameworks as well) use and leverage them. However, Gavin's statements certainly have merit, and as people get used to annotations in their current state, improvement may be difficult to achieve. What do you think?

    Threaded Messages (40)

  2. Not so fast[ Go to top ]

    When the first version of the XML specification was released, the only way to validate XML documents was to use DTDs, so the majority of complex documents were left unvalidated. After wider adoption of XML, more usable standards like XML Schema were created.

    Right now the majority of Java developers have never used generics, because most Java projects haven't yet migrated to JDK 5. But as more developers and more libraries use annotations, I think most major IDE vendors will provide a way to integrate annotation checking into their products. For now, it is possible to use the annotation processing tool (apt) which is included in the JDK.
  3. Not so fast[ Go to top ]

    When the first version of the XML specification was released, the only way to validate XML documents was to use DTDs

    Right, but DTDs offer way more functionality than @Target.
    But as more developers and more libraries use annotations, I think most major IDE vendors will provide a way to integrate annotation checking into their products. For now, it is possible to use the annotation processing tool (apt) which is included in the JDK.

    While I agree that you can implement a schema constraint language and validator yourself using APT, it's not going to help you much if the annotation-based libraries you are using do not come with constraints built in.

    The big problem here is that other JSRs like 220 and 250 are not able to define constraints or make use of validation because there is no JSR for a standard constraint language.

    i.e., we can't define @PostConstruct like this in JSR 250:

    @Target(METHOD)
    @Appears(ONCE_ONLY)
    @Signature(returnType=void.class, argTypes={})
    @Retention(RUNTIME)
    @Documented
    public @interface PostConstruct {}


    Or whatever....

    Note that this also makes the specs much harder to write, since you have to use English for this stuff ;-(
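    For illustration, here is a plain-reflection sketch of the check that an @Appears(ONCE_ONLY)-style meta-annotation would let the compiler enforce declaratively. All names here (@InitMethod, BrokenBean) are hypothetical:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical stand-in for JSR 250's @PostConstruct
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface InitMethod {}

class BrokenBean {
    @InitMethod void init() {}
    @InitMethod void initAgain() {} // second occurrence -- should be rejected
}

public class OccursCheck {
    // The constraint a standard @Appears(ONCE_ONLY) would express declaratively,
    // hand-coded: the annotation may occur on at most one method per class.
    static void validate(Class<?> type) {
        int count = 0;
        for (Method m : type.getDeclaredMethods()) {
            if (m.isAnnotationPresent(InitMethod.class)) count++;
        }
        if (count > 1) {
            throw new IllegalStateException(type.getSimpleName()
                    + ": @InitMethod must appear at most once, found " + count);
        }
    }

    public static void main(String[] args) {
        try {
            validate(BrokenBean.class);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

    Today every framework has to hand-write (and every spec has to describe in English) exactly this kind of check.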
  4. Not so fast[ Go to top ]

    Or whatever.... Note that this also makes the specs much harder to write, since you have to use English for this stuff ;-(
    Yes, I wonder if "annotating annotations" is the right way to solve this problem. Looks like we need the full expressivity of a language to express these constraints... Java? BeanShell?

    --
    Cedric
  5. Not so fast[ Go to top ]

    Or whatever.... Note that this also makes the specs much harder to write, since you have to use English for this stuff ;-(
    Yes, I wonder if "annotating annotations" is the right way to solve this problem. Looks like we need the full expressivity of a language to express these constraints... Java? BeanShell?

    I agree. You should just write compile-time validators using apt. Sounds like a good candidate for an Open Source framework.
  6. Not so fast[ Go to top ]

    I agree. You should just write compile-time validators using apt. Sounds like a good candidate for an Open Source framework.

    Bob, the EJB spec can't reference an "Open Source framework" ;-)
  7. Not so fast[ Go to top ]

    I agree. You should just write compile-time validators using apt. Sounds like a good candidate for an Open Source framework.

    Not so much a framework, more of an Ant task.
  8. XML Schema[ Go to top ]

    But XML Schema, which is itself written in XML, can express constraints on almost any XML language. I think a constraint language for annotations should be as declarative as possible, so the compiler/IDE has a lot of options for how to implement it.
  9. Not so fast[ Go to top ]

    Or whatever.... Note that this also makes the specs much harder to write, since you have to use English for this stuff ;-(
    Yes, I wonder if "annotating annotations" is the right way to solve this problem. Looks like we need the full expressivity of a language to express these constraints... Java? BeanShell? -- Cedric

    RELAX NG + Schematron + AspectJ pointcut expressions for methods? :-)
  10. Not so fast[ Go to top ]

    @Appears(ONCE_ONLY)

    As far as I know, JSR-175 does not allow any annotation to appear more than once on any target. So, in order to implement a one-to-many relation you have to explicitly define an annotation that contains an array of child annotations.
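    The workaround looks like this (annotation names hypothetical): the one-to-many relation is expressed through an explicit container annotation holding an array of children, which is then read back via the container.

```java
import java.lang.annotation.*;

// Hypothetical child annotation: cannot be applied twice to the same element
@Retention(RetentionPolicy.RUNTIME)
@interface Hint { String value(); }

// Explicit container annotation holding an array of children
@Retention(RetentionPolicy.RUNTIME)
@interface Hints { Hint[] value(); }

@Hints({ @Hint("first"), @Hint("second") })
class Annotated {}

public class WrapperDemo {
    public static void main(String[] args) {
        // Read the one-to-many relation back through the container
        for (Hint h : Annotated.class.getAnnotation(Hints.class).value()) {
            System.out.println(h.value());
        }
    }
}
```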
  11. Not so fast[ Go to top ]

    @Appears(ONCE_ONLY)
    As far as I know, JSR-175 does not allow any annotation to appear more than once on any target. So, in order to implement a one-to-many relation you have to explicitly define an annotation that contains an array of child annotations.

    Of course.

    What this would do is stop you trying to annotate two different methods of the same class as post-construct callbacks.

    By the way, it should be @Occurs, of course.
  12. Not so fast[ Go to top ]

    As far as I know, JSR-175 does not allow any annotation to appear more than once on any target. So, in order to implement a one-to-many relation you have to explicitly define an annotation that contains an array of child annotations.
    Of course. What this would do is stop you trying to annotate two different methods of the same class as post-construct callbacks. By the way, it should be @Occurs, of course.

    What is wrong with two post-construct callbacks? It just has to be supported by the container. :-)
  13. Not so fast[ Go to top ]

    As far as I know, JSR-175 does not allow any annotation to appear more than once on any target. So, in order to implement a one-to-many relation you have to explicitly define an annotation that contains an array of child annotations.
    Of course. What this would do is stop you trying to annotate two different methods of the same class as post-construct callbacks. By the way, it should be @Occurs, of course.
    What is wrong with two post-construct callbacks? It just has to be supported by the container. :-)

    Two post-construct callbacks are disallowed by the spec. This decision was made to avoid nondeterministic behavior that could occur due to ordering being undefined.
  14. XML; Not so fast[ Go to top ]

    Right, but DTDs offer way more functionality than @Target.
    Since an annotation can hold a string, it can also hold an XML document, and that would allow thorough validation of an annotation. Ideally XML would be a first-class type for annotations, especially since inline XML is upcoming in a future release of Java.
  15. I agree[ Go to top ]

    I agree. I think there should be a facility within the annotation code for validation; perhaps that facility could be a validation-checking method (maybe validate(AnnotatedElement annotatedElement) throws ValidationException would do?), implemented directly in the annotation source code.

    This method could be invoked during the compilation phase to provide static checking of annotation use, giving meaningful error messages to the developer.

    Another possible improvement is on the JavaDoc side of things. Maybe we need something like AnnotationLets, to provide extra documentation based on annotations from a class or any other annotated element.
  16. Problem[ Go to top ]

    There is a problem with this approach: only very simple annotations can be checked this way. The result of validation may depend on annotations on other classes, and those classes' annotations may in turn depend on the first. So we have a circular dependency, and this simplified validator can't handle it.

    For example, in IDEA 5.0 null/not-null types are implemented with annotations, and it is impossible to check them without checking the source as a whole.
  17. The circular dependency isn't a problem: the domain model (the Java Reflection API) would already be fully built at the annotation validation phase, so you could iterate over it and do all the checking you need. The circular dependency in the checking phase would now be explicit in code, making it documented.

    I'm not fully aware of IDEA's checking semantics, so I will assume it flags the possibility of a null reference being passed as a method parameter. If so, you're right: this kind of validation requires data-flow analysis at the bytecode level. That level of info isn't available via the Reflection API, but if it were, it could be validated through that entry point as well.
  18. Annotations and Validation[ Go to top ]

    Null parameter values would be VEEERY useful.
    (true != false != null)
    The possibility to create parameters of any type, too.
  19. ignorant ??[ Go to top ]

    Gavin, can't you just use an APT processor that validates any semantic rules you want? Is that ignorance or omission?

    In Java 6 the whole Java AST should be exposed as well, so you could do semantic checks based on anything you like, not just annotations (hence it does not belong in JSR 175).

    Pretty soon, this will also be integrated in the Eclipse JDT, i.e. incremental compilation, error reporting with line and character info, and so on.

    It should then be fairly easy to come up with a generic APT processor whose rules come from some external representation, so as not to rewrite (and hardcode) pure semantic rules all the time - using some pointcut-like syntax and predicates.

    I am sure you'll find some links to read more on APT if I am too lazy to post them here.
    Just check out Apache Beehive; I bet there are some semantic checks. APT is not just for code/artifact generation.
  20. ignorant ??[ Go to top ]

    Gavin, can't you just use an APT processor that validates any semantic rules you want?

    Without a standard, you'd have to persuade every user to install your apt processor (one per vendor?)

    Current limitations:
    - apt is not a standard until Java 6
    - apt requires a separate pass of the source before Java 6
  21. ignorant ??[ Go to top ]

    apt requires a separate pass of the source before java6

    This is the problem with APT right now, IMO: depending on the developer to perform another step in the compilation process to do constraint-violation checking is asking for trouble...
  22. Using apt[ Go to top ]

    In Tiger you use apt to run processors against your annotations AND it will then compile the source code (and any that is generated by the processors), so in Tiger it is a different step (apt vs. javac) rather than an additional step.

    In the Mustang builds (current) the JSR-269 (apt's standardized successor) reference implementation is built into javac, so it's just a matter of an additional command-line argument (or two) to javac.

    You will not need a different validator for each vendor; the validator (actually an apt or JSR-269 processor) can be shipped in the same jar as the annotation itself, making it easy for the tool (apt, javac, your favorite IDE, etc.) to find the validator that goes with the annotation.

    I see a great future for this sort of mechanism, far beyond what is being discussed here. For example, I have a framework (Compile Time Assertions) for implementing validation code to check (in one case) that implementations of a particular interface have a public no-args constructor. (By particular interface, I mean any interface annotated with @HasPublicNoArgsConstructor.)

    This is just one example. There are many constraints currently expressed in doc comments that aren't checked at compile time (because they are extra-linguistic, not part of the JLS), but which can be checked at compile time when we have a way to run our own verification code in the compiler. JSR-269 (and apt) provide that mechanism.
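    As a hedged sketch of what such a shipped validator might look like under JSR-269 (the annotation name example.InitMethod and the void-return rule are invented for illustration), a processor can report violations through the compiler's Messager:

```java
import java.util.Set;
import javax.annotation.processing.*;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.type.TypeKind;
import javax.tools.Diagnostic;

// Sketch of a JSR-269 processor shipped alongside a hypothetical
// example.InitMethod annotation: it rejects annotated methods that
// don't return void, reporting the error at compile time.
@SupportedAnnotationTypes("example.InitMethod")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class InitMethodProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                // @Target(METHOD) on the annotation guarantees this cast
                ExecutableElement method = (ExecutableElement) e;
                if (method.getReturnType().getKind() != TypeKind.VOID) {
                    processingEnv.getMessager().printMessage(
                            Diagnostic.Kind.ERROR,
                            "@InitMethod methods must return void", e);
                }
            }
        }
        return true; // claim the annotation
    }

    public static void main(String[] args) {
        // The processor advertises which annotations it validates, so the
        // tool can discover it in the same jar as the annotation itself.
        System.out.println(new InitMethodProcessor().getSupportedAnnotationTypes());
    }
}
```

    Because the processor advertises its supported annotation types, javac or an IDE can discover it from the same jar as the annotation, as described above.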
  23. Apt on Javac[ Go to top ]

    Good to hear that apt is being built into javac as has been done in Mustang. This will make IDE/Ant integration easier, IMO.
  24. ignorant ??[ Go to top ]

    Gavin, can't you just use an APT processor that validates any semantic rules you want? Is that ignorance or omission? In Java 6 the whole Java AST should be exposed as well so you could do semantic checks based on anything you like, not just annotation (hence it does not belong to the JSR 175). Pretty soon, this will also be integrated in Eclipse JDT ie incremental compilation, error reporting with line info and caracter info, and so on. It should then be fairly easy to come up with a generic APT processor whose rules are coming from some external representation to not rewrite (and harcode) pure semantic rules all the time - using some pointcuts like syntax and predicates. I am sure you'll find some links to read more on APT if I am too lazy to post them here. Just checkout Apache Beehive, I bet there are some semantic checks. APT is not just for code/artifact generation.

    This is just like saying that XML doesn't need DTDs or schemas because one particular commonly used XML editor allows you to write plugins that know how to validate particular document types.

    If you think about what you are proposing for about 30 seconds you will see what is wrong with your suggestion.

    If we have a standard constraint language then we get

    (1) compile-time validation
    (2) contextual IDE autocompletion
    (3) an easy way to communicate (!!!!)

    all at once.

    (Perhaps in future you could make suggestions without insulting other people.)
  25. ignorant ??[ Go to top ]

    I completely agree a standardized annotation validation language would be nice, but isn't such a thing kind of far off? I don't think we're even sure what we want yet. What do we do in the mean time?
    This is just like saying that XML doesn't need DTDs or schemas because one particular commonly used XML editor allows you to write plugins that know how to validate particular document types.

    1. JSR 269 (formerly known as APT) is a standard, not a "commonly used" pre-processor.

    2. XML can't require a specific programming language for validation, so they had to create a new one. It's OK for us to use Java. Just because XML uses XML to validate XML doesn't necessarily mean we have to use annotations to validate annotations (though I'm not discounting the idea).
    If you think about what you are proposing for about 30 seconds you will see what is wrong with your suggestion. If we have a standard constraint language then we get

    (1) compile-time validation
    (2) contextual IDE autocompletion
    (3) an easy way to communicate (!!!!)

    all at once.

    (Perhaps in future you could make suggestions without insulting other people.)

    Java provides for all 3 of those. Like I said, it's not ideal, but in the mean time, it's better than nothing.
  26. JSR 269[ Go to top ]

    Yeah JSR269 (http://www.jcp.org/en/jsr/detail?id=269) is the way forward. Actually I think it's a better mechanism than annotating annotations to add some validation constraints.

    Btw, Gavin I raised all your concerns 2 years ago :) Take a look: http://jroller.com/page/ara_e?entry=jsr175_thoughts and http://jroller.com/page/ara_e?entry=jsr175_redux

    Ara.
  27. mean times[ Go to top ]

    I completely agree a standardized annotation validation language would be nice, but isn't such a thing kind of far off?

    I don't see why. It could easily be defined in the scope of Java SE 6.
    What do we do in the mean time?

    The only "meantime" option we have is that all specs that define annotation libraries continue to describe the constraints in English (this is incredibly painful). And each spec vendor individually translates that English into diverse annotation processors, Eclipse plugins, IntelliJ plugins, etc. Meanwhile, book authors take that English and rewrite it into other English.

    Of course English is simply not a good language for expressing things like this, being necessarily verbose and ambiguous.
    Java provides for all 3 of those. Like I said, it's not ideal, but in the mean time, it's better than nothing.

    I don't understand. The "mean time" will last forever unless at some point we start trying to look for a solution. The Java 6 timeframe looks to me to be an excellent point to do that.

    I'm of course not arguing that no one should use annotations until we have this functionality. I'm arguing that annotations are *so important* that we need to start plugging the holes as soon as possible.
  28. mean times[ Go to top ]

    The Java 6 timeframe looks to me to be an excellent point to do that.

    Count me in.
  29. Concrete suggestions[ Go to top ]

    This is just like saying that XML doesn't need DTDs or schemas because one particular commonly used XML editor allows you to write plugins that know how to validate particular document types.

    I think I am following your idea. But are you thinking about something like a text-based schema language or along the lines of a reflection-like interface that allows annotations to provide hints to IDEs/compilation errors at runtime?
  30. Concrete suggestions[ Go to top ]

    This is just like saying that XML doesn't need DTDs or schemas because one particular commonly used XML editor allows you to write plugins that know how to validate particular document types.
    I think I am following your idea. But are you thinking about something like a text-based schema language or along the lines of a reflection-like interface that allows annotations to provide hints to IDEs/compilation errors at runtime?

    It is usually considered "elegant" to use the language itself as the schema (meta) language.

    Some analogies:

    * In relational modelling, the schema is exposed as a set of relations (the catalog).

    * XML schemas are preferred to DTDs, because they are expressed in XML.

    * C compilers are written in C.

    Annotation constraints are certainly metadata (declarations) describing the annotations, i.e. meta-metadata. Presumably, people who believe that annotations are an excellent way to express metadata will also believe that they are a good way to express meta-metadata. :-)

    When you use language X as the schema language for language X, then all the tools you write to grok X will also grok X's schema language.

    JSR-175 already comes with one kind of constraint - the @Target meta-annotation - and it would be straightforward to extend the specification to include a much richer set of constraints.
  31. +1 for annotation validation[ Go to top ]

    I have previously blogged that annotations are not mainly about configuration, but all about mini-language creation. I wrote "APIs create kind of new mini-languages. Annotations put it a step further in a clearer way as annotations provide declarative definitions, and look like added new keywords".

    See my first post [The end of Enterprise JavaBeans (!?)] on the annotation rise in Mustang and the impact of MDA in particular in my second post [Questioning the impact of java annotations on MDA].

    Sometimes I see annotations as new Java keywords. So, yes, we have to question the validation of annotations. Using a (generic or not) APT processor is one way, but then the language provided by annotations will look like a dynamically typed one; that could hurt productivity. I think we can do better, or at least we can try to. Why not try to follow Gavin's idea of using the annotation language itself as the schema (meta) language?

    In my second post, I drew a parallel between Java annotations and UML tagged values. I have not put my hands on UML's OCL (Object Constraint Language), but if OCL makes it possible to create constraints on tagged values, then, given the similarity with Java annotations, an OCL-like language may be worthwhile for annotation validation (another avenue to explore?). Maybe common meta-annotations could be built on top of such an OCL-like language.

    My 2 cents.
  32. OCL4JAVA[ Go to top ]

    Came across this when looking for something else entirely and thought of your post:

    OCL4Java

    Unfortunately it uses a preprocessing step like APT...
  33. +1 for annotation validation[ Go to top ]

    Maybe common meta-annotations could be built on top of such an OCL-like language.
    Yes, it can be solved with a single annotation for annotations (it needs some predefined context like "targetClass"). The same stuff can be used for any validation (only the context differs).

    @Check (boolean expression)

    But this stuff can be implemented using APT too :)
  34. ignorant ??[ Go to top ]

    (Perhaps in future you could make suggestions without insulting other people.)

    I think luminaries like you should just not omit one important part of the debate, hence my thread title - no offense intended. Most of this can be done with APT; it is not as perfect as it could be, and so on, but it would still deserve a substantial mention in your blog post, possibly explaining in which areas APT, as it will be in Java 6, won't be enough.

    FYI, Beehive has both an APT processor to do validation and some meta-annotations to define constraints on the Beehive annotations exposed to the user - of course, the beast that handles the logic for the meta-annotations is ... an APT processor. It would be interesting to hear from the Beehive team on that story.
  35. Why not test?[ Go to top ]

    Does the JSR provide for any validation of the runtime bytecode manipulation that Hibernate is dependent upon?
  36. Annotations and Validation[ Go to top ]

    Reading through this thread a couple of times I think there are a lot of valid points here. My take is that there are two parts to this. I think the apt-replacement that is coming in Mustang has a lot of promise. Since it will be folded into javac and have mechanisms to auto-discover annotation processors it will go a long way to plug the gap. But I don't think it completely fills the gap.

    Where I agree with Gavin is that there should be a well-defined set of meta-annotations that govern the use of annotations, and which should be known to the Mustang javac. I think the @Occurs annotation is a great example of this - I know I've had to write code to check that an annotation exists on only one method in a class. It would probably be quite straightforward to come up with a decent list of other annotations for validating the use of annotations.

    I know a lot of people don't like JSP, but I think custom tags have an approach to constraint validation that is somewhat similar. Through the TLD you can express quite a bit about a tag and its attributes, and containers and IDEs use this to inform the user of errors. Things that can't be expressed in the TLD can be expressed in a TagLibraryValidator class, which the JSP translator will invoke prior to translation. This is good for catching requirements like requiring exactly one of two attributes to be specified. I'd be happy to see a similar approach for annotations: a well-defined set of constraints in the language, and a mechanism to do custom constraint checking where the expressivity of the defined constraints falls short (as it always will somewhere).

    -Tim Fennell
    Stripes: Because web development should just be easier.
  37. Concrete solution[ Go to top ]

    Gavin makes good points here -- this is all infrastructure that's needed for annotation processing. And, I agree with Bob that in Java 5, this sort of checking should be done by writing annotation processors that run via apt (or even at run time). A great place to start would be to have available in Open Source a framework that provides:

      - a set of meta-meta annotations
      - a framework for validating them
      - a framework for adding custom, build-time annotation checks

    Apache Beehive has a good start here as part of the Controls sub-project which is a generalized framework for building annotated JavaBeans. Controls provides meta-meta annotations like:

      @AllowExternalOverride()
      @MembershipRule(value=MembershipRuleValues.AT_LEAST_ONE)
      @AnnotationMemberTypes.Date()
      ... and so on ...

    that can be used on annotation interfaces like:

      public @interface LastChanged {
          @AnnotationMemberTypes.Date(format="MM/dd/yyyy")
          public String date();
      }

    to provide additional annotation constraints and metadata. These annotations are checked at build-time using a custom annotation processor to enforce semantic checks. The set of meta-meta annotations could easily be extended to support the constraints Gavin describes. The framework also provides a generalized mechanism for adding a custom annotation "checker" that can be implemented by the developer of a Control's annotation interface.

    As a simple example, the annotation checker for Beehive's JdbcControl is here. This custom checker is used to check annotations like:

      @JdbcControl.SQL(statement="select id, name, ssn from employees where id={id}")
      public Employee getEmployeeByName(String id);

    to ensure that the SQL statement is not empty, that the "{id}" substitution matches a formal argument name, and (though it's not implemented) could even parse and validate the SQL itself. A custom checker is declared simply by adding this to a Control interface:

      @ControlInterface(checker=JdbcControlChecker.class)

    While this infrastructure is currently coupled to the Beehive Controls runtime, it embodies many concepts useful for annotation processing++. Speaking for myself, it would be interesting to discuss how to share some of the annotations and infrastructure here and to look at standardization in the future. Any interest in this?

    Personally, my biggest gripe about the APT tool in JDK 5 is that it's not possible to both generate a source file at annotation processing time *and* use the type information in the generated file in the same apt process. This problem forces some annotation processing scenarios to be two-pass and precludes using apt for compilation. Working around this problem can necessitate build-time weirdness like:

      - process annotations in some source files
        - generate source files
      - process annotations in more source files
        - generate source files
      - compile

    instead of:

      - process annotations, generate source files, and compile

    Blog post forthcoming on this topic.

    Looking forward to this being fixed in Java 6. :)

    Eddie
  38. Aspects to the rescue?[ Go to top ]

    Using annotations in a pointcut for AspectJ should be able to solve many of these issues, if I understand correctly. Using this approach you could declare compilation errors if an annotation is used on the wrong element, using the full power of the AspectJ pointcut language. And you can also use it to add behaviour only to the annotated parts of the code.
  39. I think we need to distinguish between annotations in general and validation, each having its own support.

    In none of the posts so far have I seen a single mention of Design by Contract (DBC). DBC seems to me the right way of expressing validation constraints, but this requires support from the language itself. DBC also deals correctly with inheritance. In addition, DBC must be available for interfaces:
       interface X {
          @pre: a!=null && a.length() in [10,20]
          public abstract void foo(String a);
       }
    (Details of the constraint language could be modelled on XML Schema notations for ease of use.)
    I would also like to be able to read validation parameters from a properties file (old-fashioned or XML):
       interface X {
          @pre: a!=null && a.length() in [#foo.min,#foo.max]
          public abstract void foo(String a);
       }
    And, since it is part of the signature of a method, reflection support should be available.
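    Pending language support, the first @pre constraint above can of course be hand-coded as a runtime check; this hypothetical sketch shows the behavior such a DBC precondition would standardize:

```java
// Hand-coded version of the precondition from the post:
// @pre: a != null && a.length() in [10, 20]
public class PreconditionDemo {
    static void foo(String a) {
        if (a == null || a.length() < 10 || a.length() > 20) {
            throw new IllegalArgumentException("precondition violated: " + a);
        }
        // ... real work would go here ...
    }

    public static void main(String[] args) {
        foo("abcdefghij");            // length 10 -- satisfies the contract
        try {
            foo("short");             // length 5 -- violates the precondition
        } catch (IllegalArgumentException e) {
            System.out.println("caught: precondition violated");
        }
    }
}
```

    With real DBC support, the check (and its inheritance semantics) would be generated from the declared constraint rather than written by hand in every method.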

    DBC is inherited; annotations are not necessarily.

    Annotations would be left to provide further processing or meta-data information, e.g., for transaction level for frameworks like EJB.

    Werner
  40. A little behind...[ Go to top ]

    I am a little behind keeping up with what has been going on with annotations and the advantages that they offer for reducing complexity. However, as I am reading through this thread I am wondering if we are not increasing complexity dramatically. I was hoping that some of you guys that have been thinking through this stuff for a while could short circuit the process for me a little.

    Question 1:
    Clearly, unconstrained annotations are a mechanism for extending the language semantics, offering far more expressiveness. However, and I think that this is Gavin's point, without validation we lose the advantages of using a statically type-checked language. In fact, it seems to demand both a syntax checker and a semantic checker. That way you can validate both that an annotation is syntactically correct and that the right set of annotations is being used in context. As a result, it seems that it would be awfully difficult to declare the constraints inline with other annotations. Instead, the annotation use would need to reference a constraint document of some sort, much in the way JSTL tags (as mentioned in another post) or XML documents do. Since these constraints have to be checked at compile and run time, it would seem to make sense that they should be declared not in compiled code, but declaratively and in a way that is universally available (again, like XML). I suppose using the style of annotations is fine, but ultimately does it not have to evolve to support the full semantics of a language in order to validate something as open-ended as an annotation?

    Question 2:
    The example from beehive:

    looks interesting in the sense that it can significantly shorten the amount of code required for some tasks - though it does seem to stretch the bounds of "declarative" a bit. What I am wondering is what kind of exception you get if the SQL statement produces a DB error. What would that stack trace look like? It seems that the stack trace would be very deceptive and, as a result, would create code that is difficult to maintain.

    Another example is using annotations for validation. What if the validation code throws an exception? How do you track down the errors? How does somebody 5 years from now who didn't write the code figure out what is breaking?

    If you were to generalize this more, I suppose the question is how do you constrain annotations to be purely declarative, and do you want to?

    Thanx
    LES
  41. A little behind...

    Les, great set of questions. A few comments below.
    I am a little behind keeping up with what has been going on with annotations and the advantages that they offer for reducing complexity. However, as I am reading through this thread I am wondering if we are not increasing complexity dramatically. I was hoping that some of you guys that have been thinking through this stuff for a while could short circuit the process for me a little.

    This is a fair point -- annotation complexity and overkill are often raised as big issues with using metadata. That being said, the thing to remember in this case is that the number of annotation interfaces (@interface) should be small in comparison to their use. Thus, the metadata being discussed here is really meant for folks developing @interfaces -- not for the user of an annotation. If one is using such annotations, the annotation validation infrastructure will report errors when constraints are violated. So, these annotations actually make development easier because they allow the processors to report a richer set of errors when an @interface is misused.
    Question 1: Clearly, unconstrained annotations are a mechanism for extending the language semantics, offering far more expressiveness. However, and I think that this is Gavin's point, without validation we lose the advantages of using a statically type checked language. In fact, it seems to demand both a syntax checker and a semantic checker. This way you can validate that the annotation is syntactically correct and that the right set of annotations is being used based upon the context. As a result, it seems that it would be awfully difficult to declare the constraints in line with other annotations.

    I think you're right here. I draw a distinction between the constraints placed on a single annotation and those needed when mixing annotations. The checks I've been talking about are really about making sure that a single annotation is used appropriately. Within sets of annotations, it's also possible to enforce intra-annotation checks, but I don't think that we understand how to enforce constraints between annotation sets that are unaware of each other. Meta-meta-metadata, maybe? :) Presumably, containers could accept specific sets of metadata and perform cross-annotation validation at deployment time.
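    A container-style cross-annotation check of the kind mentioned above can be sketched with plain reflection. The @Stateless/@Stateful pair below are standalone stand-ins (they echo EJB 3 names but are defined locally); the mutual-exclusion rule is exactly the sort of constraint no single @interface can express about another:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class CrossCheck {
    @Retention(RetentionPolicy.RUNTIME) @interface Stateless { }
    @Retention(RetentionPolicy.RUNTIME) @interface Stateful { }

    // A deployment-time check a container might run: the two annotations
    // are mutually exclusive, a rule that lives outside either @interface.
    static void validate(Class<?> c) {
        if (c.isAnnotationPresent(Stateless.class)
                && c.isAnnotationPresent(Stateful.class)) {
            throw new IllegalStateException(
                    c.getName() + " may not be both @Stateless and @Stateful");
        }
    }

    @Stateless @Stateful static class Broken { }
    @Stateless static class Ok { }

    public static void main(String[] args) {
        validate(Ok.class);  // passes silently
        try {
            validate(Broken.class);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

    The trade-off is that the error surfaces at deployment or startup rather than at compile time, which is the "runs later than you'd like" problem the thread keeps circling back to.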
    Question 2: The example from beehive: looks interesting in the sense that it can significantly shorten the amount of code required for some tasks, though it does seem to stretch the bounds of "declarative" a bit. What I am wondering is what kind of exception you get if the SQL statement produces a DB error. What would that stack trace look like?

    Assuming one wrote the method as:

    @JdbcControl.SQL(statement="select id, name, ssn from employees where id={id}")
    public Employee getEmployeeByName(String id) throws SQLException;

    the caller would receive the SQLException as expected, and it would be the exact SQLException thrown from the PreparedStatement, etc. The stack trace would be intact and would originate from wherever the failure occurred; it wouldn't behave much differently from hand-authored JDBC. Without the "throws" clause declared, a runtime exception called ControlException would be thrown, wrapping the specific cause; this is documented as part of the Controls framework.
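    The two behaviors described above (rethrow the declared checked exception intact, or wrap it in a runtime exception) can be illustrated with a small stand-alone sketch. ControlException here is a local stand-in for the Beehive class, and the invoke method fakes what the framework's generated code would do:

```java
import java.sql.SQLException;

public class WrapDemo {
    // Hypothetical stand-in for the Controls runtime's ControlException.
    static class ControlException extends RuntimeException {
        ControlException(Throwable cause) { super(cause); }
    }

    // If the annotated method declares "throws SQLException", the framework
    // rethrows the original; otherwise it wraps it in a runtime exception.
    static void invoke(boolean declaresThrows) throws SQLException {
        SQLException original = new SQLException("table EMPLOYEES not found");
        if (declaresThrows) throw original;
        throw new ControlException(original);
    }

    public static void main(String[] args) {
        try {
            invoke(false);
        } catch (ControlException e) {
            // The original SQLException survives intact as the cause,
            // so the stack trace still points at the real failure.
            System.out.println(e.getCause().getMessage());
        } catch (SQLException e) {
            System.out.println("declared: " + e.getMessage());
        }
    }
}
```

    Either way the original SQLException (and its stack trace) is reachable, which addresses the "deceptive stack trace" worry: the trace is wrapped, not discarded.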
    Another example is using annotations for validation. What if the validation code throws an exception? How do you track down the errors? How does somebody 5 years from now who didn't write the code figure out what is breaking?

    In the Beehive-specific case, the validation code wouldn't actually throw an exception but would report an error through APT. When using the apt tool, this would cause the build to fail by reporting an error, line number, etc., similar to a compilation error when using javac.
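    The apt tool and its com.sun.mirror API were later superseded by JSR 269 (javax.annotation.processing), which javac runs directly, but the reporting model is the same: a processor flags the offending element and the build fails with a file and line number. A sketch of an equivalent checker, with a made-up @NeedsId annotation and a made-up rule (annotated methods must take a parameter), compiled in-memory so the reported diagnostic is visible:

```java
import java.net.URI;
import java.util.Arrays;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.ElementKind;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;
import javax.tools.DiagnosticCollector;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;

public class ProcessorDemo {

    // A checker in the spirit of Beehive's validation: it reports an error
    // against the offending element, so the compiler output carries the
    // source position, and the build fails like any compilation error.
    @SupportedAnnotationTypes("*")
    public static class Checker extends AbstractProcessor {
        @Override public SourceVersion getSupportedSourceVersion() {
            return SourceVersion.latestSupported();
        }
        @Override
        public boolean process(Set<? extends TypeElement> annotations,
                               RoundEnvironment round) {
            for (TypeElement ann : annotations) {
                for (Element e : round.getElementsAnnotatedWith(ann)) {
                    if (e.getKind() == ElementKind.METHOD
                            && ((ExecutableElement) e).getParameters().isEmpty()) {
                        processingEnv.getMessager().printMessage(
                                Diagnostic.Kind.ERROR,
                                "@" + ann.getSimpleName()
                                        + " requires at least one parameter", e);
                    }
                }
            }
            return false;
        }
    }

    public static void main(String[] args) {
        final String src = "@interface NeedsId { }\n"
                + "class Dao { @NeedsId void lookup() { } }\n";
        JavaFileObject unit = new SimpleJavaFileObject(
                URI.create("string:///Dao.java"), JavaFileObject.Kind.SOURCE) {
            @Override public CharSequence getCharContent(boolean ignoreErrors) {
                return src;
            }
        };
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<>();
        JavaCompiler.CompilationTask task = javac.getTask(null, null, diagnostics,
                Arrays.asList("-proc:only"), null, Arrays.asList(unit));
        task.setProcessors(Arrays.asList(new Checker()));
        System.out.println("compiled: " + task.call());
        for (Diagnostic<? extends JavaFileObject> d : diagnostics.getDiagnostics()) {
            System.out.println(d.getKind() + " at line " + d.getLineNumber()
                    + ": " + d.getMessage(null));
        }
    }
}
```

    The key design point survives from the apt days: because the error is attached to an Element rather than thrown as an exception, the person five years from now sees "error: @NeedsId requires at least one parameter" with a file and line, not a stack trace through the framework.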