Immersive Image Visualization Webinar

Webinar Q&A
Does the VR headset support volume rendering in addition to surface rendering and the other rendering modes shown in this webinar?

Yes, it does support volume rendering. We recognize that diverse datasets exist, so we have provided a number of different rendering modes, allowing you to view your data in the most suitable one.

Can the HTC Vive be connected to any 3D software?

No. While there is an ever-growing suite of programs that support VR, adding that support requires dedicated software development. This development work has already been done in our software, so you can explore your datasets in VR.

Which VR headsets are currently supported?

Currently, we support the HTC Vive.

Can the software load grayscale images and volume render them in VR?

Absolutely, that is something our software can handle.


What image analyses can be done with DRVision software?

Right now we have 15 image analysis protocols available, including Cell Counting and Tracking, Nuclei Counting and Tracking, Cell Proliferation, Calcium Oscillation, Particle Tracking, Cell Colony Analysis, and Wound Healing. For more detail on our 2D image analyses, visit the SVCell product page. On the 19th of April we will release our first 3D image analysis protocol, which was previewed in this webinar: neuron detection and analysis.


Does the software support 4D data sets?

Yes, the software can load 4D datasets.


Are there plans to include more 3D and 4D analysis tools? If so what is the timeline for this development?

In April 2017 we will launch the first dedicated 3D tool (neuron detection and analysis). We do plan to include additional 3D and 4D analysis tools but the details are still being discussed. We would like to hear from the community to ensure we create in-demand tools, so please let us know if you are looking for a specific tool.

Will the machine learning cell classification work on 2D or 3D data?

Our focus for the April release is 3D neuron classification, which is where the majority of our in-house testing and optimization has gone. However, the classifier also works out of the box with any 2D or 3D objects.

What role do you see machine learning playing in this and similar software in the future?

Our goal for this upcoming release has been to automatically detect and classify cells and cell states. We have worked hard to automate that process for complex life sciences research questions and analyses and we believe that will be important for a lot of researchers or even clinicians. Using machine learning tools such as our cell classifier, researchers may be able to identify cell profiles associated with specific diseases or monitor disease progression.

Does the software determine endpoints like "number of vesicles per cell" or "speed of vesicles in cell that is moving"?

Yes, all of the recipes available in the software are flexible and can be combined using our measurement and data-association tools. For this example, we could apply both the particle-tracking and cell-tracking recipes to the dataset and create a relationship between cells and moving particles. From that relationship it is easy to compute accurate measurements such as “how many particles, on average, are there within this moving cell?”.
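To make the idea of associating tracked particles with a tracked cell concrete, here is a minimal sketch in plain Python. This is an illustration only, not the software's API: the bounding-box containment test and the per-frame data layout are assumptions chosen for the example.

```python
def particles_in_cell(cell_tracks, particle_tracks):
    """Average number of particles inside a moving cell across frames.

    cell_tracks: {frame: (x0, y0, x1, y1)} bounding box of the tracked cell
    particle_tracks: {frame: [(x, y), ...]} tracked particle positions
    """
    counts = []
    for frame, (x0, y0, x1, y1) in cell_tracks.items():
        # Count particles whose position falls inside the cell's box this frame.
        n = sum(
            x0 <= px <= x1 and y0 <= py <= y1
            for (px, py) in particle_tracks.get(frame, [])
        )
        counts.append(n)
    return sum(counts) / len(counts) if counts else 0.0

# Two frames: the cell drifts while particles move independently.
cells = {0: (0, 0, 10, 10), 1: (2, 2, 12, 12)}
particles = {0: [(1, 1), (5, 5), (20, 20)], 1: [(3, 3)]}
print(particles_in_cell(cells, particles))  # 1.5
```

Real measurement tools would use segmented cell outlines rather than bounding boxes, but the association step (linking two tracked object populations frame by frame) is the same.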

How do you usually work with customers with special requirements or special recipes? Is this something considered ‘add-on’ and with an extra charge?

We are very flexible at DRVision. We are proud of our software and work hard to meet the needs of our customers and the scientific community. Between each release, we compile feature lists based on community requests and prioritize those to develop the next set of features. If a feature is something the wider community is looking for, it could be included in the software as part of our own development cycle at no extra charge to you. Additionally, we offer fee-for-service image analysis (you send us the datasets, and our trained experts will analyze them and send you the results) and contract software development services (you tell us the scope and goals of the project, and we develop software to fit your needs). We are eager to bring the best technology and analysis tools to our customers, so please get in contact with us and we’ll happily discuss specific requests in more detail.

Are there plans to enable users to run their own plugins within the software?

In the past, we have incorporated custom DLL loading for plugins, but we do not have a plugin system in this release. This is something we would love to implement if there is demand for it, so reach out and let us know if that is a priority for you!


How do we get a trial of the software?

Currently, you can download a free trial of SVCell (our 2D analysis software) from our download page. We also have a closed beta program for our latest 3D software. While this program is currently full, feel free to send us an email if you would like to be put on the wait list.

What are the sizes of data that are imported into the software?

Currently, datasets of up to 3 GB are well supported for VR. We are actively working toward supporting larger files, as we recognize that some datasets, such as light-sheet and EM datasets, can be quite large.

Is this 3 GB a hard limit?

No; the bottleneck for large datasets is largely your GPU. Our software can load and process very large datasets (typically up to 2048 x 2048 x 2048 voxels per time point), but you’ll need a GPU with at least 4 GB of VRAM. Larger datasets can still be loaded, but may render at a lower resolution depending on your current hardware configuration.
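A rough back-of-the-envelope calculation shows why VRAM becomes the bottleneck. This is an illustrative sketch assuming uncompressed, single-channel storage, not a statement about the software's internal representation:

```python
def raw_volume_bytes(x, y, z, bytes_per_voxel=1):
    """Uncompressed, single-channel volume footprint in bytes."""
    return x * y * z * bytes_per_voxel

GIB = 1024 ** 3

# 8-bit 1024^3 volume: 1 GiB, which fits comfortably in 4 GiB of VRAM.
print(raw_volume_bytes(1024, 1024, 1024) / GIB)      # 1.0

# 16-bit 2048^3 volume: 16 GiB raw, so a 4 GiB GPU would have to
# downsample it to render at full frame rate.
print(raw_volume_bytes(2048, 2048, 2048, 2) / GIB)   # 16.0
```

Doubling each dimension multiplies the footprint by eight, which is why light-sheet and EM volumes outgrow GPU memory so quickly.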

What is the price for the software license?

We do not yet have a fixed price for the tools showcased in this webinar.


Can the software use NVIDIA CUDA to speed up?

Not at the moment. CUDA is NVIDIA’s proprietary general-purpose GPU programming platform and can offer significant speed and performance boosts for some image analysis tasks.

A special thanks to: Gabriel Martins at the Gulbenkian Institute for Science in Lisbon and the BigNeuron Project at the Allen Institute for Brain Science for allowing us to use their images.