
Market Availability & Pricing

Mathematica 11 from Wolfram is no longer actively commercialized: the current release is Mathematica 14.2.1, released in April 2025. Although Wolfram has not formally marked Mathematica 11 as discontinued, it no longer appears among the company's current product offerings. For up-to-date features and support, consider the latest version of Mathematica.



112 protocols using «mathematica 11»

1. Quantitative Analysis of Experimental Data (2025)

Data were analyzed using custom-written scripts in Mathematica 11 (Wolfram Research) and MATLAB R2018a (MathWorks). Statistical tests, exact values of n, and statistical values for all analyses are listed in Table S1.
2. Dimethoate Sorption and Desorption in Soil (2025)

The experimental data are presented as mean values of triplicate analyses. Statistical comparisons were performed in Statistica® 14.0.0 (TIBCO Software, Palo Alto, CA, USA), with p < 0.05 taken as significant. Normality of the data was checked with the Kolmogorov–Smirnov test. A correlation matrix was used to determine the dependence between the investigated soil characteristics and the sorption/desorption parameters of dimethoate. Principal component analysis (PCA) was also carried out to highlight the physicochemical soil properties that most strongly influence dimethoate sorption/desorption. To model the relationship between the predictor variables and these parameters, multiple regression analysis was performed to propose predictive models for the sorption and desorption of dimethoate. The sorption/desorption parameters were estimated by nonlinear estimation in Mathematica® 11.0 (Wolfram Research, Champaign, IL, USA). Model accuracy was checked by comparing the experimental data with the predicted values using the coefficient of multiple determination R², the scaled root mean squared error (SRMSE), and the χ² test error.
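The protocol does not reproduce the isotherm equations or the fitting code. As an illustration only, sorption parameters for a Freundlich-type model (q = K_f · C^(1/n), a common choice for pesticide sorption) can be estimated by least squares on the log-linearized form; the data below are hypothetical, and the authors' actual models may differ:

```python
import math

def fit_freundlich(c, q):
    """Estimate Freundlich parameters Kf and 1/n by ordinary least squares
    on the linearized form: log10(q) = log10(Kf) + (1/n) * log10(c)."""
    x = [math.log10(v) for v in c]
    y = [math.log10(v) for v in q]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx            # = 1/n (sorption intensity)
    intercept = my - slope * mx  # = log10(Kf)
    # Coefficient of determination R^2 on the linearized data
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return 10 ** intercept, slope, r2

# Hypothetical data: equilibrium concentration c (mg/L) vs sorbed amount q (mg/kg)
c = [0.5, 1.0, 2.0, 5.0, 10.0]
q = [2.1, 3.0, 4.2, 6.9, 9.8]
kf, inv_n, r2 = fit_freundlich(c, q)
```

In Mathematica itself this role is typically played by `NonlinearModelFit`, which fits the untransformed model directly and reports goodness-of-fit measures such as R².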
3. Fabrication of Microfluidic Mini-Colon Model (2024)

The microfluidic device used for mini-colon cultures was designed using CleWin 3.1 (Phoenix Software) and fabricated as previously described [9]. It was composed of three main compartments: (1) a central hydrogel chamber for cell growth; (2) two basal medium reservoirs flanking the hydrogel compartment; and (3) inlet and outlet channels for luminal perfusion [9]. An extracellular matrix containing 80% (v/v) type I collagen (5 mg ml⁻¹; Reprocell, KKN-IAC-50) and 20% (v/v) growth-factor-reduced Matrigel was loaded into the hydrogel compartment. The microchannels constituting the mini-colon architecture within the hydrogel were designed using Adobe Illustrator CC 2019 and Wolfram Mathematica 11.3. They were then read by PALM RoboSoftware 4.6 (Zeiss) and ablated using a nanosecond laser system (1 ns pulses, 100 Hz frequency, 355 nm; PALM MicroBeam laser microdissection system, Zeiss). The dimensions of the mini-colon architecture were described previously [9]. A detailed description of all the key steps required for the generation and maintenance of mini-guts is available at Protocol Exchange (10.21203/rs.3.pex-903/v1) [38].
4. Comparative 3D Nuclear Segmentation Efficacy (2024)

We compared the nuclear segmentation efficacy of the pre-trained, deep-learning-based segmentation network with two previously published, conventional open-access software packages based on 3D watershed. We used 3D confocal stacks from untreated co-cultured KP4 spheroids with a diameter of 300 μm and processed them initially using Fiji. Pre-processing included cropping the images, correcting the background, and exporting them as TIFF files. Conventional method 1 was an image analysis pipeline based on Mathematica (Mathematica 11.1, Wolfram Research Inc.) [32]. The median filter range was set to 3 pixels, the local threshold range to 10 pixels, and the hole-filling range to 1 pixel. For seed detection, Laplacian of Gaussian (LoG) was chosen with a seed range between 9 and 25 pixels; the other parameters were left at their defaults. The results of the initial and final segmentation and the detected seed positions for different planes in xy and zy were displayed in real time to increase the accuracy of the segmentation results. The post-segmentation data were exported as separate 3D stacks in TIFF format and as an XLSX file with quantitative results. For conventional method 2, we utilized OpenSegSPIM, an automated quantitative analysis tool for 3D microscopy data [33]. The median filter and noise removal size parameters were set automatically after the average nucleus size was measured using the built-in module. For the detection step, we selected shape-based detection with a sensitivity of 1 and started the segmentation process using shape-based segmentation. The segmentation masks were exported as separate 3D stacks in TIFF format and as an Excel sheet with quantitative results. The CNN segmentation was performed by a deep-learning model presented by Scherr et al. [34]. The structure of the model and the data processing routines were adapted to allow direct 3D processing of the data.
For training and inference, the images were sliced into patches of size 32 × 128 × 128 px³ (z, y, x). Training was performed with the Ranger optimizer, an initial learning rate of 1 × 10⁻³, and a batch size of three. Other training parameters were consistent with those used in [34]. Building a substantial deep-learning training dataset for 3D data is challenging: the extra dimension, relative to 2D data, substantially lengthens labeling time, and distinguishing nuclei in high-density areas is complex and labor-intensive owing to the reliance on 2D visualizations. Therefore, four synthetic 3D spheroid images, generated as described in [35], were used for model training. Quantitative analysis of the segmentation performance was conducted with the segmentation and detection measures used in the Cell Tracking Challenge [36]. As ground truth, an image patch of size 32 × 128 × 128 px³ was manually annotated by a biological expert. The image patch was extracted from the five images.
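The protocol does not show how volumes were tiled into 32 × 128 × 128 patches. As a minimal sketch (assuming each volume dimension is at least one patch long, and that the last patch along each axis is shifted back to end at the volume border rather than running past it — the authors' actual slicing may differ), the patch start coordinates could be generated as:

```python
def patch_starts(vol_shape, patch_shape=(32, 128, 128)):
    """Return (z, y, x) start coordinates of patches tiling a volume.

    Assumes every volume dimension is >= the corresponding patch dimension.
    The final patch along each axis is shifted back so it ends exactly at
    the border, partially overlapping its neighbour instead of overrunning.
    """
    starts_per_axis = []
    for size, step in zip(vol_shape, patch_shape):
        starts = list(range(0, max(size - step, 0) + 1, step))
        if starts[-1] + step < size:      # leftover strip at the border
            starts.append(size - step)    # final, partially overlapping patch
        starts_per_axis.append(starts)
    return [(z, y, x)
            for z in starts_per_axis[0]
            for y in starts_per_axis[1]
            for x in starts_per_axis[2]]

# A hypothetical 64 x 300 x 300 px stack (z, y, x)
coords = patch_starts((64, 300, 300))
```

Each coordinate then indexes one `vol[z:z+32, y:y+128, x:x+128]` slice for training or inference.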
5. Morphological Transformation of Human Craniofacial Structures (2024)

To investigate the parallels between morphological alterations in mice and their potential manifestations in humans, we conducted a detailed transformation of polygon data representing human craniofacial structures. The original image was extracted from full-body MRI scans (from the BodyParts3D dataset, a resource developed by the Database Center for Life Science in Tokyo, Japan) and transformed as specified below. Our focus was on the human skull, which was dissected into 53 high-resolution segments encompassing an array of teeth, bones, and ligaments.
For the transformation process, we employed Mathematica 11.0 (Wolfram Research, Illinois, USA) using custom-written code (available at https://zenodo.org/records/10363659). Our transformation technique involved a non-linear 3D transformation algorithm conceptualized as a three-dimensional ‘magnifying glass.’ This method was adapted from previously published code [79] and featured a radius of 3 cm centered around the nasal cavity.
We calibrated the magnification effect to mirror the expansion observed in the nasal cavity of mice. This allowed us to extrapolate and hypothesize the potential craniofacial changes in humans corresponding to those noted in mouse models. Our algorithm was designed to allow selective magnification of specific anatomical regions within the skull. This approach ensured that, while we magnified certain areas for detailed study, the rest of the craniofacial anatomy remained undistorted and true to its original proportions.
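The authors' actual implementation is the linked Zenodo code; purely as a rough illustration of the ‘magnifying glass’ idea (not their code), a vertex within a radius r0 of the centre can be pushed radially outward with a smooth falloff that vanishes at the boundary, leaving all geometry outside r0 untouched:

```python
import math

def magnify(point, center, r0=3.0, strength=0.3):
    """Radially displace `point` away from `center` inside radius r0 (cm).

    The displacement scales with a smooth falloff (1 - (r/r0)^2): strongest
    near the centre, dropping to zero at the r0 boundary, so geometry outside
    the 'magnifying glass' stays undistorted.  `strength` is a hypothetical
    calibration constant, not the authors' value.
    """
    d = [p - c for p, c in zip(point, center)]
    r = math.sqrt(sum(v * v for v in d))
    if r == 0.0 or r >= r0:
        return tuple(point)            # centre and points outside r0 are fixed
    falloff = 1.0 - (r / r0) ** 2      # 1 at the centre, 0 at the boundary
    scale = 1.0 + strength * falloff   # radial magnification factor
    return tuple(c + v * scale for c, v in zip(center, d))

# A vertex 1 cm from a nasal-cavity centre at the origin is pushed outward;
# a vertex 5 cm away (outside r0 = 3 cm) is left unchanged.
inside = magnify((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
outside = magnify((5.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

Applying such a map to every vertex of the 53 polygon segments warps only the region around the chosen centre, which matches the selective-magnification behaviour described above.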

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information they need to design robust protocols and minimize the risk of failure.

We believe the most crucial step is to grant scientists access to a wide range of reliable sources and to new tools that surpass human capabilities.

At the same time, we trust scientists to decide how to construct their own protocols from this information, as they are the experts in their field.
