TensorFlow is an open-source machine learning software library for numerical computation using data flow graphs. It was originally developed by Google.
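As a minimal sketch of that data-flow idea (using the TF 2.x API; the function and values below are invented for illustration), tf.function traces ordinary Python into a graph that TensorFlow can then optimize and execute:

```python
import tensorflow as tf

# tf.function traces this Python function into a data flow graph,
# which TensorFlow can then optimize and execute.
@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
b = tf.constant([0.5])
print(affine(x, w, b))  # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)
```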
All the above systems work quite well on big data transformations, whereas Spark really shines with its broader API support and its ability to read from and write to multiple data sources. Using Spark one can easily switch between declarative versus imperative versus functional …
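A rough PySpark sketch of that flexibility (the file names and column names here are hypothetical): the same aggregation written declaratively in SQL and functionally with the DataFrame API, reading from one source and writing to another.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("style-demo").getOrCreate()

# Hypothetical input: JSON events with 'user' and 'amount' fields.
df = spark.read.json("events.json")

# Declarative style: plain SQL over a temporary view.
df.createOrReplaceTempView("events")
by_sql = spark.sql("SELECT user, SUM(amount) AS total FROM events GROUP BY user")

# Functional style: the same aggregation via the DataFrame API.
by_api = df.groupBy("user").agg(F.sum("amount").alias("total"))

# One of Spark's strengths: write the result straight to another format.
by_api.write.mode("overwrite").parquet("totals.parquet")
```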
Can't seem to choose any deep learning platform in the above, so I'll list it here: 1. Apache MXNet: this has been used for one of our main algorithms for search as an end-to-end pipeline. We chose this because of the Scala bindings, which make it easier to integrate with our …
Well suited: most local runs over datasets and non-prod systems, where scalability is not a problem at all. Ingesting data from multiple types of data sources is an added advantage. MLlib is a decent built-in library that can be used for most common ML tasks. Less appropriate: we had to work on a RecSys where the music dataset was around 300+ GB in size, and we ran into memory issues, including occasional out-of-memory errors. MLlib also lacks support for advanced analytics and deep-learning frameworks. Finally, understanding the internals of Apache Spark is very difficult for beginners.
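To make the MLlib point concrete, here is a small sketch of a standard MLlib classification pipeline (the toy data and column names are invented for illustration):

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mllib-demo").getOrCreate()

# Toy training data: two numeric features and a binary label.
train = spark.createDataFrame(
    [(0.0, 1.1, 0.0), (2.0, 1.0, 1.0), (2.5, 3.0, 1.0), (0.1, 0.2, 0.0)],
    ["f1", "f2", "label"],
)

# MLlib expects features packed into a single vector column.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[assembler, lr]).fit(train)
model.transform(train).select("f1", "f2", "prediction").show()
```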
TensorFlow is great for most deep learning purposes. This is especially true in two domains: 1. Computer vision: image classification, object detection, and image generation via generative adversarial networks. 2. Natural language processing: text classification and generation. The good community support often means that a lot of off-the-shelf models can be used to prove a concept or test an idea quickly. That, together with Google's promotion of Colab, means that ideas can be shared quite freely. Training, visualizing, and debugging models is very easy in TensorFlow compared to other platforms (especially the good old Caffe days). In terms of productionizing, it's a bit of a mixed bag. In our case, most of our feature building is performed via Apache Spark, which means converting Parquet (columnar-optimized) files to a TensorFlow-friendly format, i.e., protobufs. The lack of good JVM bindings means that our projects end up being a mix of Python and Scala, which makes it hard to reuse some of the tooling and support code we wrote in Scala. This is where MXNet shines (though its Scala API could do with more work).
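A rough sketch of that Parquet-to-protobuf hop (the file names and feature columns are invented; reading Parquet with pandas assumes pyarrow or fastparquet is installed):

```python
import pandas as pd
import tensorflow as tf

# Hypothetical feature table produced by the Spark job.
df = pd.read_parquet("features.parquet")

with tf.io.TFRecordWriter("features.tfrecord") as writer:
    for row in df.itertuples(index=False):
        # Wrap each row in a tf.train.Example protobuf, which
        # tf.data can consume natively via TFRecordDataset.
        example = tf.train.Example(features=tf.train.Features(feature={
            "user_id": tf.train.Feature(
                int64_list=tf.train.Int64List(value=[row.user_id])),
            "score": tf.train.Feature(
                float_list=tf.train.FloatList(value=[row.score])),
        }))
        writer.write(example.SerializeToString())
```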
Theano is perhaps a bit faster and uses less memory than TensorFlow on a given GPU, perhaps due to faster element-wise ops. TensorFlow wins on multi-GPU support and “compilation” time.
If the team looking to use Apache Spark is not used to debugging jobs and tweaking their settings to ensure maximum optimization, it can be frustrating. However, the documentation and the support of the community on the internet can help resolve most issues. Moreover, it is highly configurable and integrates with different tools (e.g., it can be used by dbt Core), which increases the scenarios where it can be used.
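As an illustration of the kind of tweaking involved, a few commonly tuned settings at session-creation time (the values are placeholders, not recommendations):

```python
from pyspark.sql import SparkSession

# Illustrative values only; the right settings depend on the cluster
# and the job, which is exactly the tuning work described above.
spark = (
    SparkSession.builder
    .appName("tuned-job")
    .config("spark.sql.shuffle.partitions", "400")  # shuffle parallelism
    .config("spark.executor.memory", "8g")          # per-executor heap
    .config("spark.sql.adaptive.enabled", "true")   # adaptive query execution
    .getOrCreate()
)
```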
1. It integrates very well with Scala and Python.
2. Its SQL interoperability is very easy to understand.
3. Spark is way faster than competing technologies.
4. The Apache community's support for Spark is huge.
5. Execution times are faster compared to alternatives.
6. There are a large number of forums available for Apache Spark.
7. Spark's code is openly available and easy to gain access to.
8. Many organizations use Apache Spark, so many solutions are available for existing applications.
Community support for TensorFlow is great. There's a huge community that truly loves the platform, and there are many examples of development in TensorFlow. Often, when a good new technique is published, a TensorFlow implementation follows not long after. This makes it quick to apply the latest techniques from academia straight to production-grade systems. Tooling around TensorFlow is also good. TensorBoard has been such a useful tool that I can't imagine how hard it would be to debug a deep neural network gone wrong without it.
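For instance, a minimal sketch of logging scalars that TensorBoard can then visualize (the directory and metric names are placeholders):

```python
import tensorflow as tf

# Write scalar summaries to a directory TensorBoard can watch
# (then run: tensorboard --logdir logs).
writer = tf.summary.create_file_writer("logs/run1")
with writer.as_default():
    for step in range(100):
        loss = 1.0 / (step + 1)  # stand-in for a real training loss
        tf.summary.scalar("loss", loss, step=step)
writer.flush()
```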
Spark, in comparison to similar technologies, ends up being a one-stop shop. You can achieve so much with this one framework instead of having to stitch and weave multiple technologies from the Hadoop stack, all while getting incredible performance, minimal boilerplate, and the ability to write your application in the language of your choosing.
Keras is built on top of TensorFlow, but it is much simpler to use and more Pythonic. If you don't want to deal with too many details or need fine-grained control over advanced features, Keras is one of the best options; but if you want to dig in deeper, TensorFlow is the right choice.
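To illustrate the contrast, a small sketch showing the same kind of dense layer at both levels (the shapes and names are arbitrary):

```python
import tensorflow as tf

# Keras: the high-level, "don't sweat the details" route.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Raw TensorFlow: the same kind of layer with every detail exposed.
w = tf.Variable(tf.random.normal([8, 16]))
b = tf.Variable(tf.zeros([16]))

def dense(x):
    return tf.nn.relu(tf.matmul(x, w) + b)
```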