I provided the annotation, its registration in the XML layer, the DataEditorSupport implementation, and its addition to the DataObject's Lookup. All of this is very standard code, nothing special at all. The contribution by Andreas was more complex: parsing the XML document up to the point where the error occurs. For now, a dummy value is used for the check; eventually, all the allowed values will be compared against whatever is typed in as the object attribute's value. The method returns the offset of the value that is wrong, as well as the length of the value:
// Uses org.netbeans.editor.BaseDocument, org.netbeans.editor.TokenItem,
// org.netbeans.editor.ext.ExtSyntaxSupport, and javax.swing.text.BadLocationException.
public int[] findError(Document doc) {
    int[] errors = null;
    BaseDocument bdoc = (BaseDocument) doc;
    ExtSyntaxSupport sup = (ExtSyntaxSupport) bdoc.getSyntaxSupport();
    try {
        // Walk the token chain from the start of the document.
        TokenItem token = sup.getTokenChain(0, 1);
        while (token != null) {
            if ("attribute".equals(token.getTokenID().getName())) {
                String attr = token.getImage();
                if ("object".equals(attr)) {
                    // Skip the '=' token and look at the attribute value token.
                    TokenItem next = token.getNext().getNext();
                    if (next != null && "value".equals(next.getTokenID().getName())) {
                        String val = next.getImage();
                        // Dummy check: flag any value starting with "service:tapestry.
                        // (the token image includes the opening quote). Eventually this
                        // will be a comparison against the list of allowed values.
                        if (val.startsWith("\"service:tapestry.")) {
                            errors = new int[2];
                            errors[0] = next.getOffset() + 1; // skip the opening quote
                            errors[1] = next.getNext().getOffset() - next.getOffset(); // length of the value token
                        }
                    }
                }
            }
            token = token.getNext();
        }
    } catch (BadLocationException ex) {
        ex.printStackTrace();
    }
    return errors;
}
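To connect this to the annotation mentioned at the start: the code below is only a minimal sketch of how findError()'s result might drive such an annotation. It assumes an annotation type with a matching name is registered under Editors/AnnotationTypes in the XML layer; the class name, annotation type name, and helper method are hypothetical, not the actual code from the module.

import javax.swing.text.Element;
import javax.swing.text.StyledDocument;
import org.openide.cookies.LineCookie;
import org.openide.loaders.DataObject;
import org.openide.text.Annotation;
import org.openide.text.Line;

// Hypothetical annotation class; getAnnotationType() must return the name of an
// annotation type registered under Editors/AnnotationTypes in the XML layer.
public class WrongServiceAnnotation extends Annotation {
    public String getAnnotationType() {
        return "org-netbeans-modules-myplugin-wrongservice"; // assumed layer entry
    }
    public String getShortDescription() {
        return "Unknown Tapestry service";
    }

    // Attach the annotation to the offending value, using the offset
    // and length returned by findError().
    static void annotate(DataObject dobj, StyledDocument doc, int[] error) {
        if (error == null) {
            return;
        }
        Element root = doc.getDefaultRootElement();
        int lineIndex = root.getElementIndex(error[0]);
        int column = error[0] - root.getElement(lineIndex).getStartOffset();
        LineCookie lc = (LineCookie) dobj.getCookie(LineCookie.class);
        Line line = lc.getLineSet().getCurrent(lineIndex);
        new WrongServiceAnnotation().attach(line.createPart(column, error[1]));
    }
}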
Note, however, that BaseDocument.getSyntaxSupport() is used in findError() to get at the tokens. As far as I'm aware there is currently no alternative, even though the Lexer module provides a different way of working with tokens, so this approach is not ideal. The method will have to be rewritten once the Lexer module becomes the official way of working with tokens.
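For comparison, here is a rough sketch of what that rewrite might look like with the Lexer API (org.netbeans.api.lexer). This is an assumption about how the token walk would translate, not the module's actual code: the token id names ("ARGUMENT", "VALUE") depend on the XML lexer in use, and in practice the token hierarchy should be accessed under a document read lock.

import javax.swing.text.Document;
import org.netbeans.api.lexer.Token;
import org.netbeans.api.lexer.TokenHierarchy;
import org.netbeans.api.lexer.TokenSequence;

public int[] findErrorWithLexer(Document doc) {
    int[] errors = null;
    TokenHierarchy<Document> th = TokenHierarchy.get(doc);
    TokenSequence<?> ts = th.tokenSequence();
    if (ts == null) {
        return null; // no lexer registered for this document's MIME type
    }
    while (ts.moveNext()) {
        Token<?> token = ts.token();
        // Assumption: the XML lexer reports attribute names as "ARGUMENT"
        // and attribute values as "VALUE"; check the actual token ids.
        if ("ARGUMENT".equals(token.id().name())
                && "object".equals(token.text().toString())) {
            // Step past the '=' operator to reach the value token.
            if (ts.moveNext() && ts.moveNext()
                    && "VALUE".equals(ts.token().id().name())) {
                String val = ts.token().text().toString();
                if (val.startsWith("\"service:tapestry.")) {
                    errors = new int[2];
                    errors[0] = ts.offset() + 1;     // skip the opening quote
                    errors[1] = ts.token().length(); // length of the value token
                }
            }
        }
    }
    return errors;
}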