Using this software does not mean it is "globally better" than the alternatives. In our department we have used it along with others of similar characteristics, since, as previously indicated, every package has advantages and weaknesses relative to the others …
Based on my limited experience and use, and therefore limited overall knowledge of the software, I would recommend it especially if the data used as model inputs has previously been worked on in a spreadsheet such as Excel. I would also recommend it for analyzing small and medium-sized problems. In my experience with large problems, there have been noticeable decreases in response speed (not associated with the size of the system of equations involved in the calculation). Excellent for processing linear programming models.
It is perfectly suited for statistical analyses, but I would not recommend JMP for users who do not have a statistical background. As previously stated, the learning curve is exceptionally steep, and I think it would prove too steep for those without statistical background or knowledge.
JMP is designed from the ground up to be a tool for analysts who do not have PhDs in statistics, without in any way "dumbing down" the level of statistical analysis applied. In fact, JMP operationalizes the most advanced statistical methods. JMP's design is centered on the JMP data table and dialog boxes. It is data-focused, not jargon-focused. So, unlike other software where you must choose the correct statistical method (e.g., contingency analysis, ANOVA, linear regression), with JMP you simply assign the columns in a dialog to roles in the analysis and it chooses the correct statistical method. It's a small thing, but it reflects the thinking of the developers: analysts know their data and should only have to think about their data. Analyses should flow from there.
JMP makes most things interactive and visual. This makes analyses dynamic and engaging and removes the complete dependence on understanding p-values and other statistical concepts (though they are all there) that are often found foreign or intimidating.
One of the best examples of this is JMP's profiler. Rather than looking at static figures in a spreadsheet, or a series of formulas, JMP profiles the formulas interactively. You can monitor the effect of changing factors (Xs) and see how they interact with other factors and with the responses. You can also specify a desirability goal for each response (maximize, minimize, match target) and its relative importance to find factor settings that are optimal. I have spent many lengthy meetings working with the profiler to review design and process options, with never a dull moment.
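To give a rough feel for what a desirability optimization does behind the scenes, here is a minimal Python sketch. It is not JMP's implementation; the response models, spec limits, and weights are all made up for illustration, and a crude grid search stands in for the profiler's optimizer.

```python
import numpy as np

# Hypothetical fitted response surfaces (stand-ins for models
# estimated from real data); x1, x2 are coded factors in [-1, 1].
def yield_pct(x1, x2):      # response to be maximized
    return 60 + 8 * x1 - 5 * x2 - 3 * x1 * x2

def impurity(x1, x2):       # response to be minimized
    return 5 - 2 * x1 + 4 * x2 + x1 ** 2

def d_max(y, low, high):
    """Desirability for a 'maximize' goal: 0 at/below low, 1 at/above high."""
    return float(np.clip((y - low) / (high - low), 0.0, 1.0))

def d_min(y, low, high):
    """Desirability for a 'minimize' goal: 1 at/below low, 0 at/above high."""
    return float(np.clip((high - y) / (high - low), 0.0, 1.0))

# Overall desirability is the geometric mean of the per-response values.
best_D, best_x = -1.0, None
for x1 in np.linspace(-1, 1, 101):
    for x2 in np.linspace(-1, 1, 101):
        D = (d_max(yield_pct(x1, x2), 50, 80) *
             d_min(impurity(x1, x2), 2, 8)) ** 0.5
        if D > best_D:
            best_D, best_x = D, (round(x1, 2), round(x2, 2))

print(f"best overall desirability {best_D:.3f} at (x1, x2) = {best_x}")
```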
The design of experiments (DOE) platform is simply outstanding; in fact, its principal developers have won several awards. Over the last 15 years, using methods broadly known as "exchange algorithms," JMP has been able to create designs that are far more flexible than conventional designs. This means, for example, that you can create a design with just the interactions that are of interest; you can drop the interactions that are not of interest and avoid collecting their associated run combinations.
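For the curious, the following toy Python sketch shows the basic idea behind such exchange algorithms. It is an illustration under simplifying assumptions, not JMP's algorithm: a coordinate-exchange search that improves the D-criterion (the determinant of the information matrix) one design cell at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([-1.0, 0.0, 1.0])   # candidate levels per factor
n_runs, n_factors = 8, 2

def model_matrix(X):
    """Intercept, main effects, and the x1*x2 interaction."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2])

def log_d(X):
    """log-determinant of the information matrix F'F (the D-criterion)."""
    F = model_matrix(X)
    sign, val = np.linalg.slogdet(F.T @ F)
    return val if sign > 0 else -np.inf

# Coordinate exchange: start from a random design, then repeatedly try
# every candidate level in every cell, keeping any change that improves
# the D-criterion, until a full sweep finds no improvement.
X = rng.choice(levels, size=(n_runs, n_factors))
best = log_d(X)
improved = True
while improved:
    improved = False
    for i in range(n_runs):
        for j in range(n_factors):
            keep = X[i, j]
            for lv in levels:
                X[i, j] = lv
                score = log_d(X)
                if score > best + 1e-9:
                    best, keep, improved = score, lv, True
            X[i, j] = keep   # retain the best level found for this cell

print("design (rows = runs):")
print(X)
print("log |F'F| =", round(best, 3))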
Classical designs are rigid. For example, a Box-Behnken or other response surface design can have only continuous factors. What if you want to investigate those continuous factors along with categorical factors, such as different materials or different furnace designs, and look at the interactions among all factors? This common scenario cannot be handled with conventional designs but is easily accommodated with JMP's Custom DOE platform.
The whole point of DOE is to be able to look at multiple effects comprehensively while determining each one's influence in near or complete isolation. The custom design platform, because it produces unique designs, provides the means to evaluate just how isolated the effects are. This can be done before collecting data, because this important property of a DOE is a function of the design, not the data. By evaluating graphical reports of the quality of the design, the analyst can make adjustments, adding or removing runs, to optimize cost, effort, and expected learning.
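To make "how isolated are the effects" concrete: one diagnostic of this kind is, in essence, the pairwise correlation between model-term columns, which can be computed from the design alone, before any response data exist. A minimal sketch, using a made-up six-run design:

```python
import numpy as np

# A made-up 6-run design in two factors, purely for illustration.
X = np.array([[-1, -1], [ 1, -1], [-1,  1],
              [ 1,  1], [ 0, -1], [ 0,  1]], dtype=float)

# Model-term columns: main effects plus the two-factor interaction.
terms = {"x1": X[:, 0], "x2": X[:, 1], "x1*x2": X[:, 0] * X[:, 1]}
names = list(terms)
M = np.column_stack([terms[n] for n in names])

# |correlation| near 0 means the two effects are estimated nearly in
# isolation; values near 1 mean they are confounded with each other.
corr = np.corrcoef(M, rowvar=False)
for a in range(len(names)):
    for b in range(a + 1, len(names)):
        print(f"|corr({names[a]}, {names[b]})| = {abs(corr[a, b]):.2f}")
```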
Over the last several releases of JMP, which now appear about every 18 months, they have moved beyond dialog boxes to direct, drag-and-drop analyses for building graphs and tables as well as statistical process control charts. Interactivity such as this allows analysts to "be in the moment": as with all aspects of JMP, they can think about their subject matter without the cumbersome overhead of thinking about statistical methods. It's rather like a CEO thinking about growing the business without having to think about every nuance and intricacy of accounting. The statistical thinking is burned into the design of JMP.
Without data, analysis is not possible, and getting data into a state where it can be analyzed can be a major hassle. JMP can pull data from a variety of sources including Excel spreadsheets, CSV files, direct data feeds, and databases via ODBC. Once the data is in JMP, it has all the expected data manipulation capabilities to shape it for analysis.
Back in 2000, JMP added a scripting language (JMP Scripting Language, or JSL for short). With JSL you can automate routine analyses so end users can run them without any coding, add specific analyses that JMP does not do out of the box, and create entire analytical systems and workflows. We have done all three. For example, one consumer products company we are working with now needs a variant of a popular non-parametric analysis that they have employed for years. This method will appear in one of the menus as if it were part of JMP to begin with. As for large systems, we have written some that run to tens of thousands of lines and take the form of virtual labs and process control systems, among others.
JSL applications can be bundled and distributed as JMP add-ins, which makes it easy for users to extend their JMP installation: all they need to do is double-click the add-in file and it's installed. Pharmaceutical companies and others that are regulated, or that simply want to control the JMP environment, can lock down JMP's installation and prevent users from adding or changing functionality. In that case, add-ins can be distributed to users worldwide from a central location that is authorized and protected.
JMP's technical support is second to none. They take questions by phone and email. I usually send email, knowing that I'll get an informed response within 24 hours, and if they cannot resolve a problem immediately, they proactively keep you informed about what is being done to resolve the issue or answer your question.
On the few occasions when I have used it for relatively large optimization problems (with a large number of constraints and decision variables), the program has been slower, not substantially but noticeably, than programs such as WinQSB, even when the latter runs on 32-bit machines rather than 64-bit. That has caught my attention, even though it is not a real problem for the uses I give the program.
Given my part-time role as a university professor, it has been much more effective and practical to use other software, because of the limited options in this software's educational license.
In general, JMP is a much better fit for a general "data mining" type of application. If you want a specific statistics-based toolbox (meaning you just want to run some predetermined test, like testing for a difference in proportions), then JMP works but is not the best. JMP is much more suited to taking a data set, starting from square one, and exploring it through a range of analytics.
The Cpk (process capability) module output is shockingly poor in JMP. This sticks out because, while as a rule everything in JMP is very visual and presentable, the capability graph is a single-line-on-grey-background drawing. It is not intuitive and really doesn't tell the story. (This is in contrast with a capability graph in Minitab, which is intuitive and tells a story right off.) This is also the case with the gauge study output, used for multi-vari analysis in a Six Sigma project: it is not intuitive, and you need to do a lot of tweaking to make the graph tell the story right off. I have given this feedback to JMP, and it is possible that it will be addressed in future versions.
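For context, the statistic behind any capability graph is straightforward. Here is a minimal Python sketch of the standard Cp/Cpk calculation (the spec limits and data below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=10.2, scale=0.4, size=200)  # made-up measurements
LSL, USL = 9.0, 11.5                              # made-up spec limits

mu, sigma = data.mean(), data.std(ddof=1)
cp  = (USL - LSL) / (6 * sigma)                   # potential capability
cpk = min((USL - mu) / (3 * sigma),               # actual capability,
          (mu - LSL) / (3 * sigma))               # penalized off-center
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```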
I've never heard of JMP allowing floating licenses in a company. This will ALWAYS be a huge sticking point for small to mid-size companies that don't have teams of people dedicated to analytics all day. If every person who does problem solving needs his or her own seat, the cost can be prohibitive. (It gets cheaper per seat as you add licenses, but for a small company that might have no more than 5 users, it is still a hard sell.)
JMP has been good at releasing updates and adding new features, and their support is good. Analysis is quick, and you don't need scripting or programming experience. It has been used organization-wide and works well in that respect. With open-source alternatives there are concerns about timely support; here the licensing is cheap and maintenance is easy.
The overall usability of JMP is extremely good. What I really love about it is that it is usable by novices with no coding experience, which is not the case with most other, similar programs. It can produce a fast and easy analysis without much prior coding or statistical knowledge.
Support is great: ease of contact, rapid response, and willingness to 'stick to the task' until resolution, or an acknowledgement that the problem will have to be resolved in a future build. Basically, one gets the very real sense that another human being is sensitive to your problems, great or small.
We believe in building the models in Excel. A limitation of Excel is that Excel Solver cannot handle more than 200 decision variables with multiple constraints. It is cheap in terms of license and maintenance fees compared with other software packages available on the market.
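For models that outgrow that 200-variable limit, one option, sketched below under made-up problem data, is to export the coefficients from the spreadsheet and solve the same LP with an external solver such as SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up LP with 500 decision variables (well past Solver's limit):
# maximize c'x subject to A_ub @ x <= b_ub and 0 <= x <= 10.
# In practice c, A_ub, b_ub would be read out of the Excel model
# (e.g. via pandas.read_excel) rather than generated randomly.
rng = np.random.default_rng(7)
n_vars, n_cons = 500, 50
c = rng.uniform(1, 10, size=n_vars)              # profit per unit
A_ub = rng.uniform(0, 1, size=(n_cons, n_vars))  # resource usage rates
b_ub = np.full(n_cons, n_vars / 4)               # resource availability

# linprog minimizes, so negate c to maximize.
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 10), method="highs")
print("status:", res.message)
print("optimal objective:", round(-res.fun, 2))
```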
It is great because it has UI menus, but it costs money whereas the other programs are free. That makes it ideal for beginners, but I think that RStudio and Python will make someone a lot more marketable for future opportunities, since most companies won't pay for software when there is a great free option.
- It has allowed us to find ways to optimize (minimizing costs or times) the field processes involved in various projects.
- In specific cases where it was used for that purpose, it has even allowed us to optimize the allocation of resources (people) across different jobs whose workload varies from week to week.
- It has allowed sensitivity analysis of projects with respect to changes in their decision variables, which, in very dynamic and changing environments, resulted in substantial reductions in monetary losses.
ROI: Even if the cost can be high, the insights you get out of the tool are definitely worth much more than the actual cost of the software. In my case, most of the results of our analysis were shown to the client, who was blown away, making the money well spent for us.
Potential negative: If you are not sure your team will use it, there's a chance you will just waste money. Sometimes the IT department tries to deploy a better tool for the entire organization, but people keep using the old tool they are used to (most likely MS Excel).