FAQ

General Use

How does MIPAR compare to other software?
The specifics of this comparison depend heavily on the user's particular problem. In general, there are four key areas to consider:

Functionality:  Users are often able to automate the detection of challenging features that other tools cannot. With the introduction of deep learning capabilities in MIPAR, users can automate problems that were previously unsolvable.

User Interface: MIPAR is built around automated detection, which means we optimized our user interface to facilitate faster, more robust algorithm development without having to code. We packaged powerful tools such as deep learning networks into a friendly graphical environment that new users can learn quickly.

Integration: Users are able to move from prototyping an algorithm, to validating, to deploying at scale using the same software. We worked hard to avoid the need to learn multiple toolboxes and languages to solve a problem.

Expert Support: Everyone on the MIPAR expert support team has a PhD in the field they support. If you need our input, we are just an email, phone call, or live chat away. Our support is never outsourced, and we provide industry-leading project turnaround times. There's no question that some open-source software packages have worldwide communities, but it can be tough to get access to those experts in a timely fashion.

How long does it take to learn MIPAR?
As with most things, it depends on the problem and the experience of the user. More experienced image analysts can expect to pick up the user interface within a day. Users with minimal experience are often comfortable in 1-2 weeks. We encourage all users to visit the MIPAR Academy, our free online learning platform.

What kinds of images can MIPAR handle?
We open all the basics (TIF, JPEG, PNG, etc.), plus over 150 additional formats through the Bio-Formats library. More information can be found in our user manual here.

Does MIPAR handle 3D data?
Yes, we offer an optional 3D Extension, which allows for the alignment, segmentation, visualization, and quantification of volumetric datasets. These are typically FIB/SEM serial section or microCT datasets, but any volumetric dataset that can be sliced can be imported into MIPAR. More information can be found in our user manual here.

Recipe Building

How do I get recipes?
Users can develop their own recipes, download them from our free store, or request one from our experts. Those looking to build their own are encouraged to visit MIPAR Academy, our free online learning platform.

Does MIPAR use any machine learning?
Yes, MIPAR uses state-of-the-art deep learning technology to develop AI solutions to extremely challenging feature detection (i.e., segmentation) problems. Models can be trained without any coding required, simply by tracing features of interest. Here are some example results. More information can be found in our user manual here.

Compared with other machine learning solutions, MIPAR's approach to deep learning lets you:
  • Achieve more accurate feature detection with less training data
  • Substantially accelerate training annotation with semi-automated approaches
  • Produce more robust models that accommodate greater variation, more feature types, and multiple magnifications in one solution
  • Easily append new images to improve model accuracy and robustness as needed
  • Post-process deep learning predictions to optimize detection accuracy and customize classification to your problem

What kind of measurements can MIPAR make?
Numerous measurements are available, with categories including size, shape, and location. Different measurement classes include global (per image), feature (per feature), and local (per pixel). Full histograms/distributions can be plotted, and formal reports can be generated.
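To illustrate the difference between the global (per image) and feature (per feature) measurement classes, here is a minimal NumPy sketch. This is not MIPAR's interface (MIPAR is graphical and requires no code); it simply shows, on a small hypothetical labeled image, what these two measurement classes compute:

```python
import numpy as np

# Hypothetical labeled image: 0 = background, 1..N = detected features.
labels = np.array([
    [0, 1, 1, 0, 2],
    [0, 1, 0, 0, 2],
    [3, 0, 0, 2, 2],
])

# Feature-class measurement: area (pixel count) of each feature.
areas = np.bincount(labels.ravel())[1:]  # skip the background bin

# Global-class measurement: fraction of the image covered by features.
area_fraction = (labels > 0).sum() / labels.size

print(areas)          # one area per feature
print(area_fraction)  # a single number for the whole image
```

A histogram of `areas` is the kind of per-feature distribution MIPAR can plot and report on.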

How do I make paired measurements? (e.g., count features inside others)
  1. Set the recipe step which selects the child features as the Companion Image (Memory > Set Companion Image).
  2. Call the step which selects the parent features and set it as a Layer if needed. (If these features are selected prior to the child features, set the parent features as a memory image, then call this image after setting the child features as the Companion Image.)
  3. Go to Measure Features > choose the child layer.
  4. Check "Companion Features" in the Based on Companion panel.
This will allow you to count child features within each parent.

If the "child image" set as Companion was instead a grayscale image, then the "Intensity Mean" and "Intensity StdDev" measurements would be available, allowing you to measure something like average intensity per parent feature.

An example recipe setup is available here.
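Conceptually, the paired measurement described above amounts to counting the distinct child labels that fall inside each parent region, and the grayscale variant amounts to averaging intensity under each parent. The following NumPy sketch shows that underlying idea on small hypothetical label images; in MIPAR itself this is configured entirely through the recipe steps above, with no coding:

```python
import numpy as np

# Hypothetical labeled images: 0 = background, 1..N = features.
parents = np.array([
    [1, 1, 0, 2, 2],
    [1, 1, 0, 2, 2],
    [1, 1, 0, 2, 2],
])
children = np.array([
    [0, 1, 0, 0, 2],
    [0, 1, 0, 0, 0],
    [3, 0, 0, 0, 0],
])

# Count distinct child labels falling inside each parent feature.
child_counts = {}
for p in np.unique(parents[parents > 0]):
    inside = children[(parents == p) & (children > 0)]
    child_counts[int(p)] = len(np.unique(inside))

# If the companion were a grayscale image instead, the analogue of
# "Intensity Mean" is the mean intensity under each parent feature.
gray = np.array([
    [10, 20, 0, 40, 50],
    [10, 20, 0, 40, 50],
    [10, 20, 0, 40, 50],
], dtype=float)
intensity_mean = {int(p): float(gray[parents == p].mean())
                  for p in np.unique(parents[parents > 0])}
```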

Technical Support

What operating systems does MIPAR run on?
Windows (64-bit only)
  • Windows 11
  • Windows 10
  • Windows 7 SP1
  • Windows Server 2019
  • Windows Server 2016
Mac
  • macOS Monterey (12.0)
  • macOS Big Sur (11.0)
  • macOS Catalina (10.15.6 or later)

What computer specs are needed to run MIPAR?
Minimum System Requirements
  • CPU: Dual-core modern Intel / AMD
  • GPU: Intel HD 4000 level graphics
  • Storage: HDD 5400 RPM
  • RAM: 4 GB
Recommended System Specs
  • CPU: 8+ core modern Intel / AMD
  • GPU: NVIDIA GeForce GTX 1060 or better
  • Storage: SSD (Solid State Drive)
  • RAM: 16+ GB

Why do the fonts look jumbled?
Fonts may not have been installed properly. Try the following:
  • Navigate to C:\Program Files\MIPAR\fonts
  • Select all .ttf and .otf files
  • Right click > Install (or “Install for all users” if available)
  • Restart MIPAR