TrustRadius Insights for TensorFlow are summaries of user sentiment data from TrustRadius reviews and, when necessary, third party data sources.
Pros
Clear Documentation: Many users have found the documentation for multi-GPU support in TensorFlow to be simple and clear. This has been helpful for users who are new to working with multiple GPUs, as it allows them to easily understand and implement this feature.
Powerful Visualization Tools: Reviewers appreciate the ability to visualize the graph using TensorBoard, as it helps them understand and navigate through complex models. The interactive nature of TensorBoard also allows users to log events and monitor output over time, providing a convenient way to perform quick sanity checks.
Active Community Support: Users highly value the active community surrounding TensorFlow, which has helped them learn faster and overcome obstacles in their development work. Readily available answers and top-notch documentation from the community have been instrumental in ensuring a smooth experience while working with TensorFlow.
Used it in the past with Keras to fine-tune and deploy a NER model. Keras is a nice library on top of TensorFlow, but it is very opinionated, more so than PyTorch for example. You can use TensorFlow without Keras to develop your model, but in such a case it makes more sense to use PyTorch/JAX. The big advantage of TensorFlow is also the serving: with TensorFlow Serving it is quite easy to deploy the model (literally a matter of minutes, with reasonable performance). Performance-wise it is not always the best, though; I often get better throughput by converting the model to ONNX and then deploying with TensorRT, at the expense of more intermediary steps (a tradeoff depending on the load expected for the model). I think TensorFlow got a bad rap in the community due to the somewhat chaotic handling of the transition from version 1 to version 2; similarly, when Google dropped support for Swift for TensorFlow, fears of "yet another project that Google will kill" intensified. But TensorFlow 2 can still be a good choice for a lot of models, especially BERT-based ones (NER, QA, etc.)
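As a sketch of the export step behind the serving workflow this reviewer describes (the model, values, and paths here are illustrative, not from the review): a model is saved in the SavedModel format, which is what TensorFlow Serving loads from a versioned directory.

```python
import tensorflow as tf

# A toy stand-in for a trained model; any tf.Module or Keras model
# exported the same way can be picked up by TensorFlow Serving.
class Scaler(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return self.w * x

model = Scaler()

# TensorFlow Serving expects a numeric version subdirectory ("1" here).
tf.saved_model.save(model, "/tmp/demo_model/1")

# Sanity check: reload the SavedModel and call it.
restored = tf.saved_model.load("/tmp/demo_model/1")
print(restored(tf.constant([1.0, 2.0])).numpy())  # [2. 4.]

# The directory can then be served with the official Docker image, e.g.:
#   docker run -p 8501:8501 \
#     -v /tmp/demo_model:/models/demo_model \
#     -e MODEL_NAME=demo_model tensorflow/serving
```

The "minutes to deploy" claim in the review maps to exactly these two steps: one `tf.saved_model.save` call plus one container pointed at the directory.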
Pros
Model serving
Keras
Easy install/docker images
Lot of open source projects based on it (RL/GNN/etc.)
Lot of pre-finetuned BERT based models
Cons
Too much abstraction
Conversion of PyTorch models not that obvious sometimes
Likelihood to Recommend
Well suited:
- Pretrained BERT-based models ready to deploy
- IoT with TensorFlow Lite and the Edge TPUs
- Domains where datasets are available on Hugging Face (e.g., medical models)
Less well suited:
- Small projects, due to the complexity and fewer resources for learning
- New models, which tend to use PyTorch
Tensorflow is a good intermediate level for building neural networks, or more generally, differentiable programming. Tensorflow v1 and Tensorflow v2 have very significant architectural differences: v1 is about defining a computational graph, upon which operations are performed (like "do one step of backprop" or "batch-evaluate on this data"), while v2 does more computations "live" and is built more like, essentially, a heavy-duty calculator with a differentiable history. v2 is tightly integrated with Keras, so if you intend to use industry-standard layers and architectures from Keras, then Tensorflow is probably your best bet. Both v1 and v2 allow you to define your own layers, or do other differentiable programming tasks; for instance, differentiable physics engines have been written in Tensorflow.
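A small illustration of the v2 "live" style this reviewer describes: operations execute immediately, and `tf.GradientTape` records them as a differentiable history rather than requiring a static graph up front (a generic sketch, not code from the review).

```python
import tensorflow as tf

# TF2 eager style: the computation runs as ordinary Python,
# while GradientTape records it for later differentiation.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x + 2.0 * x          # y = x^2 + 2x

dy_dx = tape.gradient(y, x)      # dy/dx = 2x + 2 = 8 at x = 3
print(float(dy_dx))              # 8.0
```

In v1, the same computation would first be assembled as graph nodes and only evaluated inside a session; this difference is what makes online examples from the two versions look so unalike.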
Pros
Integrating with Keras.
Working on CPU/GPU/TPU neutrally.
Exporting to TFLite for browsers or edge computing.
Cons
The massive changes between v1 and v2 can be confusing when looking at examples online.
TensorFlow is losing market ground to PyTorch and JAX.
Likelihood to Recommend
If you're doing NN training, in particular, or if you have reasons why you might need customs layers or unusual architectures, then TF is probably your best bet. TF is also basically your only bet if you're planning on using any TPU edge devices, such as the Coral.
TensorFlow is used as a development platform for deep learning algorithms, in particular for: 1. Recommendations: selecting the best templates to recommend to users via email in the various countries the company has a market in, over 100 languages supported, 2. User feedback classification: when users provide feedback, natural language processing algorithms implemented in TensorFlow and Keras are used to classify issues so that stakeholders can identify the major issues with a product/product release, 3. Learning-to-rank for search: there is some development on improving search results by switching to deep learning algorithms from a gradient boosting one, and TensorFlow provides that capability, and 4. Computer vision: some experimentation performed on object detection and image classification.
Pros
TensorFlow is fairly easy to use, with adequate tutorials to get any user started quickly.
Tooling around TensorFlow, such as TensorBoard, is a gold standard: it has made the training and debugging process so much easier compared to most other deep learning platforms.
Community support for TensorFlow is very good. If there is a problem, there usually is an answer by just a little Googling. Also the documentation for TensorFlow is often top notch.
Cons
Prior to TensorFlow 2.0, setting up data ingestion for TensorFlow can be a huge pain. So much so that TensorFlow Lite and alternatives such as Keras make it more palatable. Things are changing with TensorFlow 2.0 though.
Some error messages from TensorFlow can be quite difficult to understand. For instance, a recent error using the dot product layer in TensorFlow 2.0 made it seem like there was a problem with data ingestion, but by downgrading to TensorFlow 1.14.0, the problem disappears.
Tooling with Bazel (our choice of build tool) in our monorepo is a bit of a nightmare, partly because Bazel has poor Python support. We were able to integrate PyTorch with Bazel easily, but not TensorFlow.
Would love better JVM bindings rather than just Python ones; many companies have a JVM-based stack, so that would make integration easier.
Likelihood to Recommend
TensorFlow is great for most deep learning purposes. This is especially true in two domains: 1. Computer vision: image classification, object detection and image generation via generative adversarial networks 2. Natural language processing: text classification and generation.
The good community support often means that a lot of off-the-shelf models can be used to prove a concept or test an idea quickly. That, and Google's promotion of Colab means that ideas can be shared quite freely. Training, visualizing and debugging models is very easy in TensorFlow, compared to other platforms (especially the good old Caffe days).
In terms of productionizing, it's a bit of a mixed bag. In our case, most of our feature building is performed via Apache Spark. This means having to convert Parquet (columnar-optimized) files to a TensorFlow-friendly format, i.e., protobufs. The lack of good JVM bindings means that our projects end up being a mix of Python and Scala, which makes it hard to reuse some of the tooling and support we wrote in Scala. This is where MXNet shines (though its Scala API could do with more work).
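A sketch of the Parquet-to-protobuf conversion step this reviewer mentions, assuming the Parquet rows have already been read into Python (the column names are made up for illustration): each row becomes a `tf.train.Example` protobuf serialized into a TFRecord file.

```python
import tensorflow as tf

# Rows as they might look after reading a Parquet file into Python;
# the column names here are illustrative only.
rows = [{"user_id": 1, "score": 0.5}, {"user_id": 2, "score": 0.9}]

def to_example(row):
    # tf.train.Example is the protobuf format TensorFlow's readers expect.
    return tf.train.Example(features=tf.train.Features(feature={
        "user_id": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[row["user_id"]])),
        "score": tf.train.Feature(
            float_list=tf.train.FloatList(value=[row["score"]])),
    }))

with tf.io.TFRecordWriter("/tmp/features.tfrecord") as writer:
    for row in rows:
        writer.write(to_example(row).SerializeToString())

# Read the file back to verify the records round-trip.
count = sum(1 for _ in tf.data.TFRecordDataset("/tmp/features.tfrecord"))
print(count)  # 2
```

In practice the Spark side of this conversion is often done with a connector rather than row-by-row in Python, but the target format is the same.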
TensorFlow is the best deep learning library for visualizing, training, and tuning models with large datasets. We are using TensorFlow in the research and development department for training natural language, image processing, and domain-specific predictive models. It is also used by the production department to support and host the trained models at the application level.
Pros
Detailed and more functional implementation of various algorithms.
Great visualization under TensorBoard for training models.
Multiple GPU support and availability of TPU to train large models.
Regular updates.
Large user community.
Cons
Performance issues on low-end systems.
Complex to debug multi-GPU training of large models.
It is not easy to use for new developers compared to other libraries.
Implementation for complex architecture is difficult.
Likelihood to Recommend
TensorFlow is well suited for complex model training with a large dataset using multiple GPUs, and provides training-time model visualization for fast debugging of the architecture. If you are doing a proof of concept for a new architecture, then it would not be a good choice, considering the implementation complexity and development time.
VU
Verified User
Employee in Research & Development (501-1000 employees)
We use TensorFlow for machine learning implementations. Primarily for predictive analysis and recommendation engines. It is being used at an organization level. Our objective is to use a large amount of publicly available data and make meaningful insights from it. It has helped us make better predictions and save costs. We also use it for time series analysis to make predictions in the equity market. TensorFlow has been a powerful and easy to deploy tool for various algorithms.
Pros
Support for many libraries and programming languages.
Ability to use GPU and TPU - hence faster execution.
Low effort to get started with development, hence easy to learn.
Cons
A graphic interface to create layers could help beginners.
Detailed tutorials on what goes on behind the scenes in each layer. Currently, the tutorials don't focus on that.
Better support to integrate with files on the cloud.
Likelihood to Recommend
Best suited for deployment on the cloud with a subscription-based model for execution infrastructure. For startups or companies that do not have strong data science staff, learning Tensorflow is easy because of the available libraries and online tutorials.
It can be avoided when your development stack is Microsoft, as using Azure may provide better integration. Also, if the work requires detailed customization of the algorithm, it may be easier to work directly with Python code and TensorFlow may not help.
Tensorflow (TF) is one of the Machine Learning (ML) libraries at LinkedIn. The necessary plumbing needed to deploy, maintain, and monitor a TF project is under active development. It is currently used for building Wide and Deep neural networks, where training data is on the order of millions of examples. However, in production, tree-based models and logistic regression are still popular.
Pros
A vast library of functions for all kinds of tasks - Text, Images, Tabular, Video etc.
Amazing community helps developers obtain knowledge faster and get unblocked in this active development space.
Integration of high-level libraries like Keras and Estimators make it really simple for a beginner to get started with neural network based models.
Cons
Profiling the TensorFlow (TF) graph for performance optimizations is still a challenge due to lack of proper documentation.
In our experiments with using TF-GPU on Kubernetes, we see constant memory issues causing nodes to crash.
There is still a significant learning curve and it's not as simple as other popular Python libraries. Having said that, the TF team and community are actively working on this problem.
Likelihood to Recommend
Whenever the problem calls for a neural-network-based solution, Tensorflow (TF) is a great fit.
The tf.dataset API makes it really simple to create complex data pipelines in a few lines of code.
tf.estimators API abstracts all the complex computation graph creation logic making it very simple to get started.
Eager execution makes it simple to develop a TF graph as debugging the code would be like any other imperative Python program.
TF abstracts all the complexities of scaling it to multiple machines. It has various code and data distribution algorithms ready to use.
Projects like TensorBoard make monitoring the training process really easy. It also gives the ability to view embeddings without any extra code. Their What-If Tool is extremely useful for poking at and understanding a black-box model. It also has tools to visualize data to quickly check for anomalies.
TF AutoGraph aims to convert any normal Python code into a distributed program, which is quite handy for scaling an existing code base.
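The `tf.data` point above can be sketched in a few lines (a generic example, not taken from the reviewer's code): a pipeline is built by chaining a source with transformations and batching.

```python
import tensorflow as tf

# Build a small input pipeline: source -> transform -> batch.
ds = (tf.data.Dataset.range(10)
        .map(lambda x: x * 2)   # element-wise preprocessing
        .batch(4))              # group into mini-batches

# Eager execution lets us iterate the pipeline like any Python iterable.
batches = [batch.numpy().tolist() for batch in ds]
print(batches)  # [[0, 2, 4, 6], [8, 10, 12, 14], [16, 18]]
```

Real pipelines add `shuffle`, `prefetch`, and parallel `map` calls on top of this same chaining pattern, which is what makes complex pipelines fit in a few lines.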
Obviously, TensorFlow is a great opportunity for everyone who is interested in the ML and DL area. We wanted to use TensorFlow in our company, mainly focusing on helping the Operation and Planning domains. It is also used as a POC for the Clearance domain. The purpose is quite similar: by using DL techniques and injecting large amounts of historical data, we learn the patterns, predict future trends, or advise on the best candidate suggestions. Some examples include commercial invoice recognition and classification, HS code prediction, transportation time prediction, volume density prediction, and dimension prediction.
Pros
Data pipeline implementation is quite good; loading large amounts of data and pre-processing them efficiently is no longer an issue for us
It supports all major DL algorithms and network layouts such as ConvNets, RNNs, LSTMs, Word2Vec, and even the latest transformer architecture
The device abstraction is perfectly done, and its seamless support for multiple GPUs and even TPUs brings a lot of performance gain for enterprise-scoped solutions while still keeping the flexibility
TensorBoard is amazing. I haven't seen a similar thing in other frameworks on the market. It allows us to quickly understand and debug the model, with info visualization that makes understanding much better
A very supportive community, which is key for sharing ideas and finding quick, good solutions
Cons
TensorFlow has its own model and terminology, which is not quite the same as other, more Python-styled frameworks, so the learning curve to master it is a little steep; and as a by-product of the fast iteration and release cadence, the documentation sometimes doesn't quite catch up
TensorFlow is based on a design-model-then-run-model concept, which means the model itself is static. Maybe it could also borrow some ideas from PyTorch, which is more intuitive and supports dynamic model building
Likelihood to Recommend
I think TensorFlow is very good for people who want to dive deeper and have full control of the NN layer of details. It is a production-ready design and supports the distributed environment, so it is very good for mature and enterprise production. If the user is looking for reusing some standard models and wants to do some quick POC without too much in-depth understanding of the NN, then maybe something like Keras would be the better wrapper to begin with.
VU
Verified User
Strategist in Information Technology (10,001+ employees)
Our organization was using it when it was 6 months old. It's an open-source software project by Google and pretty robust. We use this AI to solve our healthcare problems when it comes to patient monitoring, appointment cancellation, scheduling, and registration.
Pros
Multi-GPU support. It works; the documentation is simple and clear. You’ll still need to figure out how to divide and conquer your problem, but isn’t that part of the fun?
Training across distributed resources (i.e., cloud). As of v0.8, distributed training is supported.
Queues for putting operations like data loading and preprocessing on the graph.
Visualize the graph itself using TensorBoard. When building and debugging new models, it is easy to get lost in the weeds. For me, holding mental context for a new framework and model I’m building to solve a hard problem is already pretty taxing, so it can be really helpful to inspect a totally different representation of a model; the TensorBoard graph visualization is great for this.
Logging events interactively with TensorBoard. In UNIX/Linux, I like to use tail -f to monitor the output of tasks at the command line and do quick sanity checks. Logging events in TensorFlow allows me to do the same thing, by emitting events and summaries from the graph and then monitoring output over time via TensorBoard (e.g., learning rate, loss values, train/test accuracy).
Model checkpointing. Train a model for a while. Stop to evaluate it. Reload from checkpoint, keep training.
Performance and GPU memory usage are similar to Theano and everything else that uses CUDNN. Most of the performance complaints in the earlier releases appear to have been due to using CUDNNv2, so TensorFlow v0.8 (using CUDNNv4) is much improved in this regard.
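The checkpointing workflow in the list above (train, stop to evaluate, reload, keep training) can be sketched with the current `tf.train.Checkpoint` API; note the v0.8-era reviews used the older `tf.train.Saver`, and the path here is illustrative.

```python
import tensorflow as tf

# A counter standing in for model weights and optimizer state;
# real checkpoints track those objects the same way.
step = tf.Variable(0)
ckpt = tf.train.Checkpoint(step=step)

# Train a model for a while...
step.assign_add(100)
path = ckpt.save("/tmp/demo_ckpt/ckpt")   # stop and snapshot to disk

# ...later: reload from the checkpoint and keep training.
step.assign(0)                            # simulate a fresh process
ckpt.restore(path)
print(int(step))  # 100
```

The same pattern extends to `tf.train.Checkpoint(model=model, optimizer=optimizer)` so both weights and optimizer state survive the restart.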
Cons
RNNs are still a bit lacking, compared to Theano.
Cannot handle sequence inputs
Theano is perhaps a bit faster and eats up less memory than TensorFlow on a given GPU, perhaps due to element-wise ops. Tensorflow wins for multi-GPU and “compilation” time.
Likelihood to Recommend
TensorFlow can be used for training a machine learning model, and for a mobile application that uses the trained model and the built-in camera for medical image analysis. It's improving imaging analytics and pathology. Machine learning can supplement the skills of human radiologists by identifying subtler changes in imaging scans more quickly, potentially leading to earlier and more accurate diagnoses.
I personally use TensorFlow for my work only. I used this software for about a year in college during a research project on deep learning. Most of the time, I used this tool to develop deep learning algorithms that operate on images and videos. Some of the examples where I have used this tool are image classification, video classification, etc.
Pros
TensorFlow is the best when you are doing some work around deep learning
You can also use this for natural language processing, as it has a lot of inbuilt functionality for this.
It can also be used to clean up data and for data processing, as it provides lots of functionality for that too.
Cons
It would be much better if they could provide good documentation and easy ways to understand concepts.
It is difficult to understand the concepts behind, for example, the Tensor graph, which takes a lot of time.
As you have to write everything yourself, it is time-consuming to implement a whole neural network. It would be better if they provided some wrapper library to make things easier.
Likelihood to Recommend
There are lots of scenarios where TensorFlow can be used efficiently. One of them is image and video processing, including classification, recognition, etc. It can also be used for natural language processing and building chatbots. As TensorFlow has LSTMs built in, it is easy to use for NLP work.
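As a sketch of the built-in LSTM support mentioned above, here is a minimal text-classification model in Keras; the vocabulary size, dimensions, and output are arbitrary placeholder values, not from the review.

```python
import tensorflow as tf

# A minimal binary text classifier using the built-in LSTM layer.
# All hyperparameters here are illustrative placeholders.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,), dtype="int32"),     # variable-length token ids
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.LSTM(32),                         # built-in recurrent layer
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary label
])
model.compile(optimizer="adam", loss="binary_crossentropy")
print(model.output_shape)  # (None, 1)
```

With tokenized sentences as integer sequences, `model.fit` trains this directly, which is why having LSTMs built in makes basic NLP tasks straightforward to set up.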