Added ability to trim values as record elements are read and added new tests to ensure beans weren't overwriting themselves.
ahenson committed Jun 22, 2016
1 parent 2fe8cdf commit eb02d73
Showing 25 changed files with 301 additions and 54 deletions.
19 changes: 10 additions & 9 deletions README.md
@@ -44,19 +44,19 @@ Work Completed
* Added use of the foreach construct `for(String item : collection)`
* Added use of `Streams` and `Lambdas` where it made code cleaner
* Cleaned up some comments
* Removed com.blackbear.flatworm.Callback - use ExceptionCallback or RecordCallback instead (moved to new callbacks package).
* Removed com.blackbear.flatworm.Callback - use ExceptionCallback or RecordCallback instead (moved to new callbacks package)
* Added ability to use a JavaScript snippet to see if a line should be processed by a given Record (can also specify a script file and method name to keep scripts external to code)
* Added ability to specify ignore-field on a record-element to explicitly ignore it.
* Changed the record-element "type" attribute to "converter-name" as that's what it's really linked to.
* Changed the minlength/maxlength attributes for the length-ident element to min-length/max-length for consistency.
* On Field Identity (field-ident) - added ignore-case tag to indicate whether or not case should play a factor in comparison.
* Added ability to specify ignore-field on a record-element to explicitly ignore it
* Changed the record-element "type" attribute to "converter-name" as that's what it's really linked to
* Changed the minlength/maxlength attributes for the length-ident element to min-length/max-length for consistency
* On Field Identity (field-ident) - added ignore-case tag to indicate whether or not case should play a factor in comparison
* Added support for single segment-element configurations where the child doesn't have to be a collection
* Added support for non-delimited segment-elements - a "child" line can be a non-delimited line
* Added line identifiers
* Added annotation support
* Added ability to auto-resolve the converter type based upon the field's type (given that it's a common type in String, Double, Float, Long, or Integer).
* Added more constants where appropriate. There is likely more that can be done here.
* Added support for scripts to be executed before a record is read and after a record is read - which allows for dynamic reconfiguration of a FileFormat during parsing - some files specify their parsing rules within the file so static configuration must be updated at run time.
* Added more constants where appropriate. There is likely more that can be done here
* Added support for scripts to be executed before a record is read and after a record is read - which allows for dynamic reconfiguration of a FileFormat during parsing - some files specify their parsing rules within the file so static configuration must be updated at run time
* Record
* Before record is read:
* Parameters: `(FileFormat fileFormat, String line)`
@@ -71,10 +71,11 @@ Work Completed
* After line is read:
* Parameters: `(LineBO line, String inputLine, Map<String, Object> beans, ConversionHelper conversionHelper)`
* Return: `ignored`
* Added ability to specify multiple configuration options and then specify the preferred one at run time.
* Added ability to specify multiple configuration options and then specify the preferred one at run time
* Added ability to create Line identifiers (vs. inheriting purely from the record alone). The Script Identity script will take three parameters:
* Parameters: `(FileFormat fileFormat, LineBO line, String line)`
* Added support for "optional" lines, meaning the parser doesn't "skip" a line if a LineBO has an Identity set for lineIdentity but the line has no data present for a given record.
* When using the Field Identity, start position is no longer required for Record Elements as it can be auto-derived from the Field Identity's fieldLength property
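As a rough illustration of how several of the items above (annotation support, value trimming, converter auto-resolution) might come together, here is a minimal sketch of an annotated record bean. The annotation and member names (`@Record`, `@Line`, `@RecordElement`, `order`, `length`, `trimValue`) are inferred from the loader changes in this commit rather than confirmed API, and the bean itself is purely illustrative:

```java
// Hypothetical bean; annotation names and members are assumptions inferred from this commit.
@Record(lines = @Line)
public class Customer {

    // trimValue defaults to true in this commit, so trailing pad characters are stripped
    // before the value reaches a converter.
    @RecordElement(order = 1, length = 20)
    private String name;

    // The converter can be auto-resolved from the field type (Integer), so no
    // converter name needs to be declared.
    @RecordElement(order = 2, length = 5, trimValue = false)
    private Integer age;

    // Getters and setters omitted for brevity.
}
```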

TODOs
-------
@@ -86,7 +87,7 @@ TODOs
* Add missing JavaDocs
* Add ability for folks to write their own Identity implementations and make them annotation enabled. Right now they would have to build their own annotation configuration loader and extend the relevant parts - which is fine, but this can be done more cleanly I think.
* Add more verbose logging
* Is the field-length attribute really needed on Field Identity? We know the length by the matching strings. They should all be the same length else the match will always fail.
* Is the field-length attribute really needed on Field Identity? We know the length by the matching strings. They should all be the same length else the match will always fail

[flatworm 3.0.2]: https://github.com/trx/flatworm
[flatworm 4.0.0-SNAPSHOT]: https://github.com/ahenson/flatworm
@@ -34,9 +34,9 @@
public class PropertyUtilsMappingStrategy implements BeanMappingStrategy {
@Override
public void mapBean(Object bean, String beanName, String property, Object value,
Map<String, ConversionOptionBO> conv) throws FlatwormParserException {
Map<String, ConversionOptionBO> conversionOption) throws FlatwormParserException {
try {
ConversionOptionBO option = conv.get("append");
ConversionOptionBO option = conversionOption.get("append");
if (option != null && "true".equalsIgnoreCase(option.getValue())) {
Object currentValue = PropertyUtils.getProperty(bean, property);
if (currentValue != null)
@@ -16,8 +16,6 @@

package com.blackbear.flatworm.annotations;

import com.blackbear.flatworm.CardinalityMode;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
@@ -40,19 +38,21 @@
String encoding() default "UTF-8";

DataIdentity identity() default @DataIdentity;

Converter[] converters() default {};

Line[] lines() default { @Line() };
Line[] lines() default {};

/**
* A scriptlet to execute prior to reading/parsing the next record.
*
* @return the {@link Scriptlet} configuration.
*/
Scriptlet beforeReadRecordScript() default @Scriptlet;

/**
* A scriptlet to execute after reading/parsing a record.
*
* @return the {@link Scriptlet} configuration.
*/
Scriptlet afterReadRecordScript() default @Scriptlet;
@@ -42,6 +42,7 @@

int lineIndex() default -1;

boolean trimValue() default true;

ConversionOption[] conversionOptions() default {};
}
@@ -31,6 +31,7 @@
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

import lombok.Getter;
import lombok.Setter;
@@ -138,26 +139,52 @@ public void parseInput(String inputLine, Map<String, Object> beans, ConversionHe
parseInputDelimited(inputLine, identity);
}
} else {
int charPos = 0;
parseInput(inputLine, elements, charPos, identity);
// This is to help keep the configuration shorter in terms of what fields are required.
int charPos = getStartingPosition(elements, identity);

parseInput(inputLine, elements, charPos);
}

if (afterScriptlet != null) {
afterScriptlet.invokeFunction(this, inputLine, beans, conversionHelper);
}
}

/**
* Determine the starting position for parsing the line using a non-delimited approach.
*
* @param elements The {@link LineBO} elements.
* @param identity The {@link Identity} instance that was used to identify the line.
* @return the starting position for parsing the line.
*/
private int getStartingPosition(List<LineElement> elements, Identity identity) {
int startPosition = 0;
if (identity instanceof LineTokenIdentity) {
Optional<RecordElementBO> recordElement = elements.stream()
.filter(element -> element instanceof RecordElementBO)
.map(RecordElementBO.class::cast)
.findFirst();

if (recordElement.isPresent()
&& (!recordElement.get().isFieldStartSet() || recordElement.get().getFieldStart() < 0)) {
LineTokenIdentity lineTokenIdentity = LineTokenIdentity.class.cast(identity);
startPosition = lineTokenIdentity.getLineParsingStartingPosition();
}
}

return startPosition;
}
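As a concrete illustration of the fallback above (a standalone sketch, not the flatworm API): when a Field Identity has matched the first fieldLength characters of a line and the first record element declares no start position, parsing simply resumes right after the identifying token.

```java
// Standalone sketch of the start-position fallback; names and values are illustrative.
public class StartPositionSketch {
    public static void main(String[] args) {
        String line = "ADDR123 Main Street";
        int identityFieldLength = 4;          // the field identity matched "ADDR"
        boolean fieldStartConfigured = false; // the record element declared no explicit start

        // Same idea as getStartingPosition(): with no configured start,
        // begin parsing immediately after the identifying token.
        int charPos = fieldStartConfigured ? 0 : identityFieldLength;
        System.out.println(line.substring(charPos)); // prints "123 Main Street"
    }
}
```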

/**
* Parse out the content of the line based upon the configured {@link RecordElementBO} and {@link SegmentElementBO} instances.
*
* @param inputLine The line of data to parse.
* @param lineElements The {@link LineElement} instances that drive how the line of data will be parsed.
* @param charPos The character position of the line to begin at.
* @param identity The {@link Identity} instance used to determine that this {@link LineBO} instance should parse this line.
* @return The last characater position of the line that was processed.
* @return The last character position of the line that was processed.
* @throws FlatwormParserException should the parsing fail for any reason.
*/
private int parseInput(String inputLine, List<LineElement> lineElements, int charPos, Identity identity)
private int parseInput(String inputLine, List<LineElement> lineElements, int charPos)
throws FlatwormParserException {
for (LineElement lineElement : lineElements) {
if (lineElement instanceof RecordElementBO) {
@@ -186,7 +213,7 @@ private int parseInput(String inputLine, List<LineElement> lineElements, int cha
}
} else if (lineElement instanceof SegmentElementBO) {
SegmentElementBO segmentElement = (SegmentElementBO) lineElement;
charPos = parseInput(inputLine, segmentElement.getLineElements(), charPos, identity);
charPos = parseInput(inputLine, segmentElement.getLineElements(), charPos);
captureSegmentBean(segmentElement);
}
}
@@ -206,6 +233,10 @@ private void mapField(String fieldChars, RecordElementBO recordElement) throws F
String property = cardinality.getPropertyName();
Object bean = beans.get(beanRef);

if (recordElement.isTrimValue()) {
fieldChars = fieldChars.trim();
}

Object value;
if (!StringUtils.isBlank(recordElement.getConverterName())) {
// Using the configuration based approach.
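To make the effect of the new flag concrete, here is a tiny standalone sketch (values and class name are illustrative) of what the block above does to a padded fixed-width field before it reaches a converter:

```java
// Standalone illustration of the trim step in mapField(); not the flatworm API.
public class TrimValueSketch {
    public static void main(String[] args) {
        String fieldChars = "JANE DOE      ";  // fixed-width slice still carrying its padding
        boolean trimValue = true;              // trim-value="true" in XML / trimValue on the annotation

        if (trimValue) {
            fieldChars = fieldChars.trim();    // the converter now receives "JANE DOE"
        }
        System.out.println("[" + fieldChars + "]");  // prints [JANE DOE]
    }
}
```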
@@ -38,4 +38,11 @@ public interface LineTokenIdentity extends Identity {
* @return {@code true} if it matches and {@code false} if not.
*/
boolean matchesIdentity(String token);

/**
* Return the start position within a line of data at which parsing should begin once the line has been identified by this
* {@link LineTokenIdentity} implementation.
* @return The starting position at which parsing should begin on a data line.
*/
int getLineParsingStartingPosition();
}
@@ -153,7 +153,7 @@ public Map<String, Object> parseRecord(String firstLine, BufferedReader in,
if (!linesWithIdentities.isEmpty()) {
boolean continueParsing = true;
do {
lastReadLine = in.readLine();
lastReadLine = parsedLastReadLine ? in.readLine() : lastReadLine;
Optional<LineBO> matchingLine = linesWithIdentities
.stream()
.filter(line -> {
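The conditional read above supports the "optional" lines behavior called out in the README: the reader only advances when the previously buffered line was actually consumed, so an unmatched line is not lost. Below is a standalone sketch of that pattern; the input data and the matching rule are made up, and this is not the flatworm API:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Standalone sketch of the read-ahead pattern used above; not the flatworm API.
public class ReadAheadSketch {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new StringReader("HEADER one\nDETAIL two\n"));
        String lastReadLine = null;
        boolean parsedLastReadLine = true;

        for (int i = 0; i < 3; i++) {
            // Only pull a fresh line when the buffered one has been consumed.
            lastReadLine = parsedLastReadLine ? in.readLine() : lastReadLine;

            // Stand-in for matching the line against the configured line identities.
            parsedLastReadLine = lastReadLine != null && lastReadLine.startsWith("HEADER");
            // The third iteration prints "DETAIL two" again: the unmatched line is retained.
            System.out.println(lastReadLine + " -> consumed=" + parsedLastReadLine);
        }
    }
}
```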
@@ -58,6 +58,10 @@ public class RecordElementBO implements LineElement {
@Setter
private Integer order;

@Getter
@Setter
private boolean trimValue;

// The elements are queried, there are just multiple layers of abstraction that the compiler can't see.
@Getter
@Setter
@@ -535,6 +535,8 @@ public void processFieldAnnotations(RecordBO record, Class<?> clazz) throws Flat
LineBO line = loadLine(annotatedLine);
loadForProperty(annotatedLine.forProperty(), line);

addBeanToRecord(clazz, record);

Class<?> fieldType = Util.getActualFieldType(field);
line.getCardinality().setParentBeanRef(clazz.getName());
line.getCardinality().setBeanRef(fieldType.getName());
@@ -572,21 +574,16 @@ public RecordElementBO loadRecordElement(RecordBO record, Class<?> clazz, Field

try {
// See if the bean has been registered.
if (!record.getRecordDefinition().getBeanMap().containsKey(clazz.getName())) {
BeanBO bean = new BeanBO();
bean.setBeanName(clazz.getName());
bean.setBeanClass(clazz.getName());
bean.setBeanObjectClass(clazz);
record.getRecordDefinition().addBean(bean);
}
addBeanToRecord(clazz, record);

CardinalityBO cardinality = new CardinalityBO();
cardinality.setBeanRef(clazz.getName());
cardinality.setPropertyName(field.getName());
cardinality.setCardinalityMode(CardinalityMode.SINGLE);
recordElement.setCardinality(cardinality);
recordElement.setConverterName(annotatedElement.converterName());

recordElement.setConverterName(annotatedElement.converterName());
recordElement.setTrimValue(annotatedElement.trimValue());
recordElement.setOrder(annotatedElement.order());

if (annotatedElement.length() != -1) {
@@ -618,6 +615,21 @@ public RecordElementBO loadRecordElement(RecordBO record, Class<?> clazz, Field
return recordElement;
}

/**
* Create a {@link BeanBO} entry for the given {@code clazz} and register it with the given {@code record} if one has not already been registered.
* @param clazz the class from which to construct a new {@link BeanBO} instance.
* @param record the {@link RecordBO} instance to which a new {@link BeanBO} instance will be added.
*/
public void addBeanToRecord(Class<?> clazz, RecordBO record) {
if (!record.getRecordDefinition().getBeanMap().containsKey(clazz.getName())) {
BeanBO bean = new BeanBO();
bean.setBeanName(clazz.getName());
bean.setBeanClass(clazz.getName());
bean.setBeanObjectClass(clazz);
record.getRecordDefinition().addBean(bean);
}
}
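A hedged usage sketch of the guard above, in the spirit of the new tests verifying that beans don't overwrite themselves. The loader variable and the Customer class are hypothetical; only getRecordDefinition() and getBeanMap() from the surrounding diff are assumed to exist:

```java
// Hypothetical usage; registering the same class twice must leave the original BeanBO intact.
loader.addBeanToRecord(Customer.class, record);
loader.addBeanToRecord(Customer.class, record);  // second call is a no-op, not an overwrite

// Exactly one entry should exist for Customer (assuming nothing else was registered).
assert record.getRecordDefinition().getBeanMap().containsKey(Customer.class.getName());
assert record.getRecordDefinition().getBeanMap().size() == 1;
```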

/**
* Load the {@link SegmentElement} metadata and associated {@link RecordElement} data (and so on) for the given {@code Field} within the
* given {@code clazz}. Due to the tree-like structure of {@link SegmentElementBO} instances, this could result in several recursive
@@ -675,6 +675,8 @@ protected RecordElementBO readRecordElement(Node node) throws FlatwormConfigurat
Node beanref = getAttributeNamed(node, "beanref");
Node converterName = getAttributeNamed(node, "converter-name");
Node ignoreField = getAttributeNamed(node, "ignore-field");

Node trimValue = getAttributeNamed(node, "trim-value");

if (start != null) {
recordElement.setFieldStart(Util.tryParseInt(start.getNodeValue()));
@@ -702,6 +704,9 @@
if(ignoreField != null) {
recordElement.setIgnoreField(Util.tryParseBoolean(ignoreField.getNodeValue()));
}
if(trimValue != null) {
recordElement.setTrimValue(Util.tryParseBoolean(trimValue.getNodeValue()));
}

readConversionOptions(node, recordElement);
return recordElement;
@@ -69,7 +69,7 @@ public void addMatchingString(String matchingString) {
if (ignoreCase) {
valueToAdd = valueToAdd.toLowerCase();
}

matchingStrings.add(valueToAdd);
}

@@ -186,6 +186,11 @@ public boolean matchesIdentity(String token) {
return matchingStrings.contains(tokenToTest);
}

@Override
public int getLineParsingStartingPosition() {
return fieldLength;
}

@Override
public String toString() {
return "FieldIdentityImpl{" +
@@ -21,13 +21,14 @@
import org.junit.Test;

import java.util.HashMap;
import java.util.Map;

import static org.junit.Assert.assertEquals;

public class UtilTest {
@Test
public void testRemoveBlanks() {
HashMap options = new HashMap();
Map<String, ConversionOptionBO> options = new HashMap<>();
assertEquals("foo", Util.justify("foo ", "both", options, 0));
assertEquals("foo", Util.justify("foo ", "both", options, 0));
assertEquals("foo", Util.justify(" foo", "both", options, 0));
@@ -38,7 +39,7 @@ public void testRemoveBlanks() {

@Test
public void testMultiplePadCharacters() {
HashMap options = new HashMap();
Map<String, ConversionOptionBO> options = new HashMap<>();
options.put("pad-character", new ConversionOptionBO("pad-character", "0Oo"));
assertEquals("f", Util.justify("foo", "both", options, 0));
assertEquals("f", Util.justify("fooOO00", "both", options, 0));
@@ -58,13 +58,21 @@ public void validateLines(RecordDefinitionBO recordDefinition, int expectedLines
assertFalse("Empty RecordDefinition.lines.", recordDefinition.getLines().isEmpty());
assertEquals("RecordDefinition.lines size is incorrect.", expectedLinesCount, recordDefinition.getLines().size());
}
else {
assertFalse("RecordDefinition.lines has entries when it's not supposed to.",
recordDefinition.getLines() != null && !recordDefinition.getLines().isEmpty());
}

if (expectedLinesWithIdentitiesCount > 0) {
assertNotNull("Null RecordDefinition.linesWithIdentities.", recordDefinition.getLinesWithIdentities());
assertFalse("Empty RecordDefinition.linesWithIdentities.", recordDefinition.getLinesWithIdentities().isEmpty());
assertEquals("RecordDefinition.linesWithIdentities size is incorrect.", expectedLinesWithIdentitiesCount,
recordDefinition.getLinesWithIdentities().size());
}
else {
assertFalse("RecordDefinition.linesWithIdentities has entries when it's not supposed to.",
recordDefinition.getLinesWithIdentities() != null && !recordDefinition.getLinesWithIdentities().isEmpty());
}
}

public void validateLine(LineBO line, String expectedDelimiter, char expectedQuotChar) {
@@ -73,15 +81,18 @@ public void validateLine(LineBO line, String expectedDelimiter, char expectedQuo
assertEquals("Wrong delimiter", expectedDelimiter, line.getDelimiter());
}

public void validateRecord(RecordBO record, Class<?> expectedClass) {
validateRecord(record, expectedClass.getSimpleName());
public void validateRecord(RecordBO record, Class<?> expectedClass, boolean expectRecordIdentity) {
validateRecord(record, expectedClass.getSimpleName(), expectRecordIdentity);
}

public void validateRecord(RecordBO record, String expectedName) {
public void validateRecord(RecordBO record, String expectedName, boolean expectRecordIdentity) {
assertNotNull(String.format("%s bean is null.", expectedName), record);
assertFalse(String.format("%s.name was not loaded.", expectedName), StringUtils.isBlank(record.getName()));
assertNotNull(String.format("%s.recordIdentity was not loaded.", record.getName()), record.getRecordIdentity());
assertNotNull(String.format("%s.recordDefinition was not loaded.", record.getName()), record.getRecordDefinition());

if (expectRecordIdentity) {
assertNotNull(String.format("%s.recordIdentity was not loaded.", record.getName()), record.getRecordIdentity());
}
}

public void validateRecordDefinition(RecordBO record, int expectedLinesCount, int expectedLinesWithIdentitiesCount) {