Merged
33 changes: 32 additions & 1 deletion pom.xml
@@ -29,12 +29,41 @@
</scm>

<modules>
<module>tensorflow-tools</module>
<module>tensorflow-core</module>
<!--module>tensorflow-utils</module-->
<!--module>tensorflow-frameworks</module> TODO -->
<!--module>tensorflow-starters</module> TODO -->
</modules>

<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<junit.version>4.12</junit.version>
<jmh.version>1.21</jmh.version>
</properties>

<dependencyManagement>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
</dependency>
<dependency>
<groupId>org.openjdk.jmh</groupId>
<artifactId>jmh-core</artifactId>
<version>${jmh.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.openjdk.jmh</groupId>
<artifactId>jmh-generator-annprocess</artifactId>
<version>${jmh.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
</dependencyManagement>

<!-- Two profiles are used:
ossrh - deploys to ossrh/maven central
bintray - deploys to bintray/jcenter. -->
@@ -64,6 +93,7 @@
</distributionManagement>
</profile>
</profiles>

<!-- http://central.sonatype.org/pages/requirements.html#developer-information -->
<developers>
<developer>
@@ -72,6 +102,7 @@
<organizationUrl>http://www.tensorflow.org</organizationUrl>
</developer>
</developers>

<build>
<plugins>
<!-- GPG signed components: http://central.sonatype.org/pages/apache-maven.html#gpg-signed-components -->
34 changes: 19 additions & 15 deletions tensorflow-core/tensorflow-core-api/pom.xml
@@ -26,6 +26,11 @@
<version>${project.version}</version>
<optional>true</optional> <!-- for compilation only -->
</dependency>
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-tools</artifactId>
<version>${project.version}</version>
</dependency>
Contributor:
That feels strange. Why do we need this dependency here? Is it just because we can't have a separate module until we move away from the code generator in C++?

Collaborator (Author):
Tensors are now based on NdArray for mapping their memory and allowing direct access to it, so the core API must depend on nio-utils. I'm not sure I understand what is strange; do you mean that memory mapping should occur outside the core API?
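For context, the idea can be sketched as follows. All names here are illustrative stand-ins, not the actual tensorflow-tools API: a tensor exposes its memory as an n-dimensional view over a flat buffer, with direct coordinate-based access.

```java
import java.nio.FloatBuffer;

// Minimal sketch of the NdArray idea: an n-dimensional view over a flat
// buffer with direct coordinate-based access. Names are hypothetical,
// not the real tensorflow-tools classes.
public class NdArraySketch {
    static final class FloatNdArray {
        private final FloatBuffer buffer;
        private final long[] shape;

        FloatNdArray(FloatBuffer buffer, long... shape) {
            this.buffer = buffer;
            this.shape = shape;
        }

        // Row-major offset computed from n-dimensional coordinates
        private int offset(long... coords) {
            long offset = 0;
            for (int i = 0; i < coords.length; i++) {
                offset = offset * shape[i] + coords[i];
            }
            return (int) offset;
        }

        float getFloat(long... coords) { return buffer.get(offset(coords)); }

        void setFloat(float value, long... coords) { buffer.put(offset(coords), value); }
    }

    public static void main(String[] args) {
        // A 2x3 matrix backed by a flat buffer of 6 floats
        FloatNdArray matrix = new FloatNdArray(FloatBuffer.allocate(6), 2, 3);
        matrix.setFloat(42.0f, 1, 2);  // write the last element
        System.out.println(matrix.getFloat(1, 2));
    }
}
```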

Contributor:
I think it's just the names that are confusing me. When I read something like tensorflow-core, I assume it means the actual "core" without dependencies on other modules. Maybe tensorflow-nio should really be named something else and not "TensorFlow"? That might also help with adoption by other projects. That's probably something we should continue on the mailing list you created, and one of the first things to clear up with the folks from MXNet, DL4J, etc.

Collaborator (Author):
Exactly, that's why I renamed the artifact tensorflow-nio-utils to nio-utils (though I'm not a huge fan of that name either, so if anyone comes up with a better one...). The tensorflow you've noticed there just refers to our organization in the group ID and is not part of the artifact name itself.

Collaborator:
ndarray-core? I think nio-utils is likely to make people think of java.nio.* and as the language moves away from those classes (and they are 15+ years old at this point), calling them "new" is something of a misnomer.

Collaborator (Author):
I agree; we will group artifacts if there is a need, but right now I cannot think of any other utility library than this one.

Contributor:
I think there should be a functional need to group things together. If it's just to put them in abstract categories, it will make it hard to decide which category each module belongs to, possibly without any actual benefit. Maven doesn't even put the parent name in artifact coordinates, so unless we do it explicitly, as with tensorflow-core -> tensorflow-core-api, the benefits are even more marginal.

Collaborator (Author):
OK, here is what I ended up doing: I preserved the tensorflow-utils name, but now it contains the code of the library itself, including the DataBuffer and NdArray APIs.

So if in the future we have more small utility classes like these that we would like to share with the world, and that are independent from the TF runtime, they will end up in this library as well (kind of a Guava for ML).

There are a lot of presumptions and open questions that make it hard to pick the right name for this library right away, so let's simply rename it in the future if we need to.

Contributor:
Sounds good, but let's name it tensorflow-util just to be consistent with the package name? Or conversely, let's name the package org.tensorflow.utils? I don't have a preference for either; I just think that consistency is a good thing to have whenever it makes sense.

Collaborator (Author):
I think it is fine to have them mismatch here: tensorflow-utils sounds more natural as the name of the library (as there is more than one utility class in it), while package names are often singular by convention (e.g. java.util.*).

In addition, we don't have an org.tensorflow.core package either in our core artifacts.

<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
@@ -71,10 +76,6 @@
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.0</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
<executions>
<execution>
<id>default-compile</id>
@@ -94,7 +95,7 @@
</goals>
<configuration>
<includes>
<include>org/tensorflow/c_api/presets/*.java</include>
<include>org/tensorflow/internal/c_api/presets/*.java</include>
</includes>
</configuration>
</execution>
@@ -199,7 +200,7 @@
<configuration>
<skip>${javacpp.parser.skip}</skip>
<outputDirectory>${project.basedir}/src/gen/java</outputDirectory>
<classOrPackageName>org.tensorflow.c_api.presets.*</classOrPackageName>
<classOrPackageName>org.tensorflow.internal.c_api.presets.*</classOrPackageName>
</configuration>
</execution>
<execution>
@@ -209,9 +210,9 @@
<goal>build</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}/native/org/tensorflow/c_api/${javacpp.platform}${javacpp.platform.extension}/</outputDirectory>
<outputDirectory>${project.build.directory}/native/org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}/</outputDirectory>
<skip>${javacpp.compiler.skip}</skip>
<classOrPackageName>org.tensorflow.c_api.**</classOrPackageName>
<classOrPackageName>org.tensorflow.internal.c_api.**</classOrPackageName>
<copyLibs>true</copyLibs>
<copyResources>true</copyResources>
</configuration>
@@ -222,6 +223,9 @@
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.0</version>
<configuration>
<argLine>
-Djava.library.path=${project.build.directory}/native/org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}
</argLine>
<additionalClasspathElements>${project.build.directory}/native/</additionalClasspathElements>
</configuration>
</plugin>
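The added `argLine` points the test JVM at the freshly built JNI libraries: at runtime, `System.loadLibrary` resolves native libraries against `java.library.path`, which this flag extends. A minimal sketch of that lookup mechanism (illustrative, not part of the build):

```java
// Minimal sketch: the JVM resolves System.loadLibrary(...) against the
// directories listed in java.library.path, which the surefire <argLine>
// above extends with the directory holding the freshly built natives.
public class LibraryPathSketch {
    public static void main(String[] args) {
        String libPath = System.getProperty("java.library.path");
        // Every JVM defines this property, so the lookup itself never fails
        System.out.println(libPath != null);
    }
}
```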
@@ -254,16 +258,16 @@
<!-- In case of successive builds for multiple platforms
without cleaning, ensures we only include files for
this platform. -->
<include>org/tensorflow/c_api/${javacpp.platform}${javacpp.platform.extension}/</include>
<include>org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}/</include>
</includes>
<classesDirectory>${project.build.directory}/native</classesDirectory>
<excludes>
<exclude>org/tensorflow/c_api/${javacpp.platform}${javacpp.platform.extension}/*.exp</exclude>
<exclude>org/tensorflow/c_api/${javacpp.platform}${javacpp.platform.extension}/*.lib</exclude>
<exclude>org/tensorflow/c_api/${javacpp.platform}${javacpp.platform.extension}/*.obj</exclude>
<exclude>org/tensorflow/c_api/${javacpp.platform}${javacpp.platform.extension}/*mklml*</exclude>
<exclude>org/tensorflow/c_api/${javacpp.platform}${javacpp.platform.extension}/*iomp5*</exclude>
<exclude>org/tensorflow/c_api/${javacpp.platform}${javacpp.platform.extension}/*msvcr120*</exclude>
<exclude>org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}/*.exp</exclude>
<exclude>org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}/*.lib</exclude>
<exclude>org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}/*.obj</exclude>
<exclude>org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}/*mklml*</exclude>
<exclude>org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}/*iomp5*</exclude>
<exclude>org/tensorflow/internal/c_api/${javacpp.platform}${javacpp.platform.extension}/*msvcr120*</exclude>
</excludes>
</configuration>
</execution>
@@ -3,4 +3,8 @@ op {
endpoint {
name: "collective.BroadcastRecv"
}
out_arg: {
name: "data"
rename_to: "output"
}
}
@@ -3,4 +3,8 @@ op {
endpoint {
name: "collective.BroadcastSend"
}
out_arg: {
name: "data"
rename_to: "output"
}
}
@@ -0,0 +1,7 @@
op {
graph_op_name: "CollectiveGather"
out_arg: {
name: "data"
rename_to: "output"
}
}
@@ -3,4 +3,8 @@ op {
endpoint {
name: "collective.AllReduce"
}
out_arg: {
name: "data"
rename_to: "output"
}
}
@@ -0,0 +1,6 @@
op {
graph_op_name: "KafkaDataset"
endpoint {
name: "data.KafkaDataset"
}
}
@@ -1,3 +1,7 @@
op {
graph_op_name: "NcclAllReduce"
out_arg: {
name: "data"
rename_to: "output"
}
}
@@ -1,3 +1,7 @@
op {
graph_op_name: "NcclReduce"
out_arg: {
name: "data"
rename_to: "output"
}
}
@@ -97,22 +97,25 @@ class Type {
static Type IterableOf(const Type& type) {
return Interface("Iterable").add_parameter(type);
}
static Type DataTypeOf(const Type& type) {
return Class("DataType", "org.tensorflow").add_parameter(type);
}
static Type ForDataType(DataType data_type) {
switch (data_type) {
case DataType::DT_BOOL:
return Class("Boolean");
return Class("TBool", "org.tensorflow.types");
case DataType::DT_STRING:
return Class("String");
return Class("TString", "org.tensorflow.types");
case DataType::DT_FLOAT:
return Class("Float");
return Class("TFloat", "org.tensorflow.types");
case DataType::DT_DOUBLE:
return Class("Double");
return Class("TDouble", "org.tensorflow.types");
case DataType::DT_UINT8:
return Class("UInt8", "org.tensorflow.types");
return Class("TUInt8", "org.tensorflow.types");
case DataType::DT_INT32:
return Class("Integer");
return Class("TInt32", "org.tensorflow.types");
case DataType::DT_INT64:
return Class("Long");
return Class("TInt64", "org.tensorflow.types");
case DataType::DT_RESOURCE:
// TODO(karllessard) create a Resource utility class that could be
// used to store a resource and its type (passed in a second argument).
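The effect of this switch is that generated ops are parameterized by tensor-specific types such as TInt32 rather than boxed Java types such as Integer. Sketched below with stand-in classes (the real ones live in org.tensorflow.types and org.tensorflow.types.family):

```java
// Stand-ins for the org.tensorflow.types hierarchy this change targets;
// illustrative only, not the real classes.
public class TypeFamilySketch {
    interface TType {}                     // family of all tensor types
    interface TNumber extends TType {}     // numeric tensor types
    static final class TInt32 implements TNumber {}
    static final class TString implements TType {}

    // A generated op output is now typed by a tensor type, not a boxed type
    static final class Output<T extends TType> {
        private final Class<T> type;
        Output(Class<T> type) { this.type = type; }
        Class<T> type() { return type; }
    }

    public static void main(String[] args) {
        Output<TInt32> ints = new Output<>(TInt32.class);
        // Output<Integer> would not compile: Integer is not a TType
        System.out.println(ints.type().getSimpleName());
    }
}
```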
@@ -124,24 +124,16 @@ void WriteSetAttrDirective(const AttributeSpec& attr, bool optional,
.EndLine()
.BeginBlock("for (int i = 0; i < " + array_name + ".length; ++i)")
.Append(array_name + "[i] = ");
if (attr.type().kind() == Type::GENERIC) {
writer->Append("DataType.fromClass(" + var_name + ".get(i));");
} else {
writer->Append(var_name + ".get(i);");
}
writer->Append(var_name + ".get(i);");
writer->EndLine()
.EndBlock()
.Append("opBuilder.setAttr(\"" + attr.op_def_name() + "\", ")
.Append(array_name + ");")
.EndLine();
} else {
writer->Append("opBuilder.setAttr(\"" + attr.op_def_name() + "\", ");
if (attr.var().type().name() == "Class") {
writer->Append("DataType.fromClass(" + var_name + "));");
} else {
writer->Append(var_name + ");");
}
writer->EndLine();
writer->Append("opBuilder.setAttr(\"" + attr.op_def_name() + "\", ")
.Append(var_name + ");")
.EndLine();
}
}

@@ -179,7 +171,7 @@ void RenderSecondaryFactoryMethod(const OpSpec& op, const Type& op_class,
if (attr.type().kind() == Type::GENERIC &&
default_types.find(attr.type().name()) != default_types.end()) {
factory_statement << default_types.at(attr.type().name()).name()
<< ".class";
<< ".DTYPE";
} else {
AddArgument(attr.var(), attr.description(), &factory, &factory_doc);
factory_statement << attr.var().name();
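The switch from `.class` to `.DTYPE` suggests each tensor type now carries a DataType constant that secondary factories use as a default. A hypothetical sketch of that pattern (stand-in classes, not the real org.tensorflow API):

```java
// Hypothetical sketch of the DTYPE-constant pattern this diff relies on:
// each tensor type exposes a DataType<T> singleton instead of being
// identified by its Class object.
public class DTypeSketch {
    static final class DataType<T> {
        private final String name;
        DataType(String name) { this.name = name; }
        @Override public String toString() { return name; }
    }

    static final class TFloat {
        static final DataType<TFloat> DTYPE = new DataType<>("FLOAT");
    }

    // A secondary factory method can default a type attribute to TFloat.DTYPE
    static <T> String describeDefault(DataType<T> dtype) {
        return "default dtype = " + dtype;
    }

    public static void main(String[] args) {
        System.out.println(describeDefault(TFloat.DTYPE));
    }
}
```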
@@ -345,11 +337,10 @@ void RenderInterfaceImpl(const OpSpec& op, RenderMode mode,

if (mode == OPERAND) {
bool cast2obj = output.type().wildcard();
Type return_type =
Type::Class("Output", "org.tensorflow")
.add_parameter(cast2obj ? Type::Class("Object") : output.type());
Type return_type = Type::Class("Output", "org.tensorflow")
.add_parameter(cast2obj ? Type::Class("TType", "org.tensorflow.types.family") : output.type());
Method as_output = Method::Create("asOutput", return_type)
.add_annotation(Annotation::Create("Override"));
.add_annotation(Annotation::Create("Override"));
if (cast2obj) {
as_output.add_annotation(
Annotation::Create("SuppressWarnings").attributes("\"unchecked\""));
@@ -365,7 +356,7 @@
} else if (mode == LIST_OPERAND) {
Type operand = Type::Interface("Operand", "org.tensorflow");
if (output.type().wildcard()) {
operand.add_parameter(Type::Class("Object"));
operand.add_parameter(Type::Class("TType", "org.tensorflow.types.family"));
} else {
operand.add_parameter(output.type());
}
@@ -429,7 +420,7 @@
RenderMode mode = DEFAULT;
if (op.outputs().size() == 1) {
const ArgumentSpec& output = op.outputs().front();
Type operand_type(output.type().wildcard() ? Type::Class("Object")
Type operand_type(output.type().wildcard() ? Type::Class("TType", "org.tensorflow.types.family")
: output.type());
Type operand_inf(Type::Interface("Operand", "org.tensorflow")
.add_parameter(operand_type));
@@ -86,7 +86,8 @@ class TypeResolver {
if (next_generic_letter_ > 'Z') {
next_generic_letter_ = 'A';
}
return Type::Generic(string(1, generic_letter));
return Type::Generic(string(1, generic_letter))
.add_supertype(Type::Class("TType", "org.tensorflow.types.family"));
}
};
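With this supertype added, every generic type parameter the generator emits is bounded by TType, so generated factory signatures go from `<T>` to `<T extends TType>`. An illustrative sketch with stand-in classes:

```java
// Illustrative effect of adding the TType supertype to generated generics;
// stand-in classes only, not the real generated ops.
public class GenericBoundSketch {
    interface TType {}
    static final class TFloat implements TType {}

    static final class Constant<T extends TType> {
        private final T value;
        private Constant(T value) { this.value = value; }
        T value() { return value; }

        // Before this change the bound was absent: static <T> Constant<T> create(T v)
        static <T extends TType> Constant<T> create(T value) {
            return new Constant<>(value);
        }
    }

    public static void main(String[] args) {
        Constant<TFloat> c = Constant.create(new TFloat());
        // Constant.create("hello") would not compile: String is not a TType
        System.out.println(c.value() instanceof TType);
    }
}
```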

@@ -148,7 +149,7 @@ std::pair<Type, Type> TypeResolver::TypesOf(const OpDef_AttrDef& attr_def,
types = MakeTypePair(Type::Class("Boolean"), Type::Boolean());

} else if (attr_type == "shape") {
types = MakeTypePair(Type::Class("Shape", "org.tensorflow"));
types = MakeTypePair(Type::Class("Shape", "org.tensorflow.tools"));

} else if (attr_type == "tensor") {
types = MakeTypePair(Type::Class("Tensor", "org.tensorflow")
Expand All @@ -157,7 +158,7 @@ std::pair<Type, Type> TypeResolver::TypesOf(const OpDef_AttrDef& attr_def,
} else if (attr_type == "type") {
Type type = *iterable_out ? Type::Wildcard() : NextGeneric();
if (IsRealNumbers(attr_def.allowed_values())) {
type.add_supertype(Type::Class("Number"));
type.add_supertype(Type::Class("TNumber", "org.tensorflow.types.family"));
}
types = MakeTypePair(type, Type::Enum("DataType", "org.tensorflow"));

@@ -305,7 +306,7 @@ AttributeSpec CreateAttribute(const OpDef_AttrDef& attr_def,
bool iterable = false;
std::pair<Type, Type> types = type_resolver->TypesOf(attr_def, &iterable);
Type var_type = types.first.kind() == Type::GENERIC
? Type::ClassOf(types.first)
? Type::DataTypeOf(types.first)
: types.first;
if (iterable) {
var_type = Type::ListOf(var_type);