We Dissect Protocols

Mathematica 12

Manufactured by Wolfram Research
93 citations
Sourced in United States, United Kingdom
About the product

Mathematica 12 is a computational software package developed by Wolfram Research. It provides a wide range of mathematical capabilities, including numerical and symbolic computation, visualization, and programming. Mathematica 12 can be used for tasks such as data analysis, scientific research, and educational purposes.

Automatically generated - may contain errors


93 protocols using «mathematica 12»

1

Fabrication of Ti6Al4V-Ti5Cu Composite via SLM and SPS

2025
Sheet-gyroid, also referred to as the double gyroid or gyroid foam because both sides of the wall are gyroid surfaces, served as the framework for filling Ti-5Cu powder. These sheet-gyroid TPMS scaffolds, with porosity levels of 70%, 80%, and 90%, were generated via computer-aided design software (Wolfram Mathematica 12) using Equation (1):

φ_SkG(x, y, z) = sin(ax)·cos(ay) + sin(ay)·cos(az) + sin(az)·cos(ax) = C
Equation (1) defines the sheet-gyroid TPMS scaffold, denoted φ_SkG(x, y, z), as an implicit function of x, y, and z. Here, C controls the width of the matrix phase, determining the porosity of the gyroid structure, while a governs the periodicity of the TPMS surface. The porosity (P) of the sheet-gyroid unit cell structures was regulated by Equation (2), where V_S represents the solid unit volume and V_0 denotes the periodic cube volume. The detailed design and porosity regulation can be found in our previous study [24 (link)].

P = (1 − V_S / V_0) × 100%
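As a rough illustration of how such a level-set design works, the sketch below evaluates the gyroid field on a voxel grid and estimates the porosity of the sheet |φ| ≤ C by voxel counting. This is not the authors' code; the grid resolution and the C values are arbitrary illustrative assumptions.

```python
import math

def gyroid(x, y, z, a):
    """Gyroid implicit field: sin(ax)cos(ay) + sin(ay)cos(az) + sin(az)cos(ax)."""
    return (math.sin(a * x) * math.cos(a * y)
            + math.sin(a * y) * math.cos(a * z)
            + math.sin(a * z) * math.cos(a * x))

def sheet_gyroid_porosity(c, a=2.0 * math.pi, n=32):
    """Estimate P = (1 - Vs/V0) * 100% of the sheet-gyroid |phi| <= c
    by counting solid voxels over one periodic unit cube."""
    solid = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if abs(gyroid(i / n, j / n, k / n, a)) <= c:
                    solid += 1
    return (1.0 - solid / n**3) * 100.0

# A thinner sheet (smaller C) leaves more open pore space:
print(sheet_gyroid_porosity(0.2), ">", sheet_gyroid_porosity(0.6))
```

In practice, C would be tuned (e.g. by bisection on this estimate) until the porosity matches a 70%, 80%, or 90% target.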
A Concept Laser Mlab R SLM device was used to prepare the TPMS porous scaffolds. The laser power was 95 W, the scanning speed 900 mm/s, the layer thickness 25 μm, and the laser track width 0.11 mm. Post-production, Ti-5Cu powder (high-purity titanium powder and 5 wt.% high-purity copper powder, ball milled for 1 h) was filled and compacted into the sheet-gyroid scaffolds. The next stage consisted of sintering the Ti-5Cu powder mixture to form a Ti6Al4V-Ti5Cu composite. This sintering was conducted by spark plasma sintering (SPS) at 920 °C under a pressure of 50 MPa, with a heating rate of 100 °C/min and a hold of 5 min. A schematic diagram of manufacturing the Ti6Al4V-Ti5Cu composite by the two-step SLM-plus-SPS approach is shown in Figure 1; the SPS sintering equipment is shown in Figure 2.
2

Differential Proteomic Analysis of Silica-Coated Iron Oxide Nanoparticles

2025
Differential analysis was performed using R software. First, the proteins significantly enriched in the Fe3O4@SiO2 NP protein coronas (NPCs) were selected. Screening criteria were as follows: fold change (FC) ≥ 1.2 or ≤ 0.83 and an FDR-adjusted P-value < 0.05. P-values were adjusted using the Benjamini–Hochberg method. Functional enrichment analyses were performed according to Gene Ontology (https://www.geneontology.org). The org.Hs.eg.db and clusterProfiler packages in R were used to carry out the GO analyses. Among the three GO categories (biological process (BP), molecular function, and cellular component), BP was the most variable, and the six most significantly enriched BPs by adjusted P-value were selected for display.
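The screening rule above is compact enough to sketch directly. The following is an illustrative Python re-implementation (the authors worked in R); the fold changes and P-values are invented for demonstration.

```python
def benjamini_hochberg(pvals):
    """FDR-adjusted p-values by the Benjamini-Hochberg step-up procedure."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end  # 1-based rank of p-value i
        prev = min(prev, pvals[i] * n / rank)
        adj[i] = prev
    return adj

def significant(fold_changes, pvals, fc_up=1.2, fc_down=0.83, alpha=0.05):
    """Indices passing FC >= 1.2 or <= 0.83 AND BH-adjusted p < 0.05."""
    adj = benjamini_hochberg(pvals)
    return [i for i, (fc, q) in enumerate(zip(fold_changes, adj))
            if (fc >= fc_up or fc <= fc_down) and q < alpha]

# Invented example data: four proteins.
fc = [1.5, 1.05, 0.7, 2.0]
p = [0.001, 0.001, 0.20, 0.004]
print(significant(fc, p))  # → [0, 3]
```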
To quantify the ability of Fe3O4@SiO2 NPs to deplete and enrich serum proteins, the mean squared error (MSE) was calculated using Wolfram Mathematica 12.1.0.0 software. With Y = X as the predicted regression line, the slightly modified equation was as follows:

MSE = (1/n) · Σ_{i=1}^{n} (Y_i − X_i)²
For the triple-protein assay, the normalized spectral count (NSpC) was used to describe the composition of the NPC for better accuracy and was calculated as follows:

NSpC_i = (SpC_i / MW_i) / Σ_{j=1}^{n} (SpC_j / MW_j)
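Both formulas are simple enough to sketch in a few lines. The function names and example numbers below are mine, not the authors'.

```python
def mse_identity(y, x):
    """MSE = (1/n) * sum_i (Y_i - X_i)**2 against the predicted line Y = X."""
    n = len(y)
    return sum((yi - xi) ** 2 for yi, xi in zip(y, x)) / n

def nspc(spectral_counts, molecular_weights):
    """NSpC_i = (SpC_i / MW_i) / sum_j (SpC_j / MW_j)."""
    ratios = [s / m for s, m in zip(spectral_counts, molecular_weights)]
    total = sum(ratios)
    return [r / total for r in ratios]

# One point off the identity line by 1 gives MSE = 1/3:
print(mse_identity([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))
# NSpC values always sum to 1 by construction:
print(round(sum(nspc([10, 5, 20], [50.0, 25.0, 100.0])), 10))  # → 1.0
```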
3

Medication Administration Error Assessment

2025
Any single medication administration, including omission, was observed as the standard event and also served as the common denominator. Each medication administration could contain any number of medication errors; however, an omission (i.e., drug omission) excluded other medication errors, since these could not be observed. For the purposes of MAE assessment, information from the SmPC, factual databases (UpToDate, Micromedex, and DrugBank), and the process standards of individual hospitals was used. All steps (data collection, data entry, and evaluation) were performed according to a standardised protocol. All medications were categorised following the ATC classification (WHO Collaborating Centre for Drug Statistics Methodology 2022).
Data were analysed using Wolfram Mathematica 12 (Wolfram Research Inc., Illinois, USA). Results were reported as counts and percentages for binomial variables (e.g., the occurrence of a given major MAE/specific MAE/procedural MAE) and as mean ± standard deviation or median and quartiles for numerical variables (e.g., aggregated quantities of some type of major MAE/specific MAE/procedural MAE). The sum of relevant medication administrations was used as the denominator.
The dependency of either major MAE or specific MAE frequency on the nurse or inpatient characteristics, respectively, was tested using the general linear model. In this model, clustering was considered only at the level of the nurse administering the drugs, while data at higher levels (wards and hospitals) were treated as independent and identically distributed. Results were considered statistically significant if the p-value was below the Sidak-adjusted threshold (Lee 2010): 1 − (1 − 0.05)^(1 / (number of major MAE + specific MAE))
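The Sidak-adjusted threshold can be sketched as follows; the test count of 10 is an illustrative assumption, not the study's actual number of MAE categories.

```python
def sidak_threshold(m, alpha=0.05):
    """Per-test significance threshold 1 - (1 - alpha)**(1/m) for m tests,
    keeping the family-wise error rate at alpha under independence."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

# With, e.g., 10 tested MAE categories, each individual p-value
# must fall below:
print(round(sidak_threshold(10), 5))  # → 0.00512
```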
For each of the significant dependencies, the impact was evaluated by estimating its effect size using η2 and classified as small (η2 > 0.01), medium (η2 > 0.06), and large (η2 > 0.14) according to Cohen's convention (Cohen 1992 (link)).
For the dependence of either major MAE or specific MAE frequency on the type of medication (according to the first level of the ATC class), their real occurrence with the probability distribution corresponding to the frequency of a related administration was compared by the Goodness‐of‐fit test. In this model the Sidak correction was also used.
The dependency of the occurrence of major MAE on procedural MAE was analysed using two methods. First, the generalised linear model with a Bernoulli distribution and logit link function (Fahrmeir and Tutz 1994 (link)) was used for estimation and testing the impact of any procedural MAE per se, assuming the other procedural MAE remains constant. Second, the decision tree model with the CHAID algorithm (Hastie, Friedman, and Tibshirani 2009 ) tested the combinations of procedural MAE with the critical impact on major MAE occurrences. The output in the form of a risk ratio with a 95% confidence interval and the threshold for significance of p‐value < 0.05 were used for both models.
4

Structural Analysis of Metal Complexes

2025
To characterize the structural features of the systems under study, we carried out Born–Oppenheimer ab initio molecular dynamics simulations for the bis(3-hydroxy-4-pyronato) oxovanadium(IV) complex, VO(3hp)2, and the bis(maltolato) zinc(II) complex, Zn(mal)2, in vacuum and in water clusters of 25 molecules, using the CP2K program [66 (link)]. We performed NVT simulations (constant volume, constant temperature, using a Nosé thermostat [67 ]) in cubic boxes with side lengths of about 17 Å (the length varied ±2 Å from system to system) under periodic boundary conditions. We employed the generalized gradient approximation with the Perdew–Burke–Ernzerhof (PBE) exchange–correlation functional [68 (link),69 (link)]. The Grimme D3 approach was taken to account for dispersive interactions [70 (link)], and the DZVP-MOLOPT double-ζ polarization basis [71 ] was used with the Goedecker–Teter–Hutter pseudopotentials [72 ]. A time step of 0.5 fs was employed. In each case, we performed a thermalization phase under a velocity-rescaling regime [73 (link)]. Typically, thermalization of the prepared systems was reached within 300 fs. Next, we conducted NVT sampling of 1 ps. The target accuracy for self-consistent-field convergence was 10−6 hartree. The cut-off and the relative cut-off of the grid level were set to 400 and 100 Rydberg, respectively. The energy convergence threshold was set to 10−12. We conducted data analysis as well as the preparation and training of the convolutional neural network using Mathematica 12 (Wolfram Research Inc., Champaign, Illinois, USA).
5

Pharmacokinetic Analysis of Analytes

2025
One serum, urine, and saliva sample from each sampling time point and each participant was processed and analyzed as described above. Maximum concentration (cmax) and time to reach maximum concentration (tmax) in serum were determined for each participant. Elimination constant (ke) and elimination half-life (t1/2) of the analytes in serum were calculated using a non-compartmental analysis in the PKanalix 2024 software (Lixoft, Antony, France) applying Eq. (1).
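Since Eq. (1) itself is not reproduced on this page, the sketch below shows only the standard non-compartmental estimate matching the description: ke from a log-linear fit of the terminal serum concentrations, and t1/2 = ln(2)/ke. The data are synthetic, not the study's measurements.

```python
import math

def elimination_constant(times, concentrations):
    """ke as the negated slope of ln(C) versus t (ordinary least squares)."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -slope

def half_life(ke):
    """t1/2 = ln(2) / ke."""
    return math.log(2.0) / ke

# Synthetic mono-exponential decay C(t) = 10 * exp(-0.2 t):
t = [2.0, 4.0, 6.0, 8.0]
c = [10.0 * math.exp(-0.2 * ti) for ti in t]
ke = elimination_constant(t, c)
print(round(ke, 3), round(half_life(ke), 2))  # → 0.2 3.47
```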
For compartmental analysis the measured data were fitted in Wolfram Mathematica 12 (Wolfram Research, Champaign, USA) using the two-compartment model with first-order absorption.
Creatinine concentrations used to normalize urine concentrations of the analytes were determined in an external, accredited laboratory. Serum concentrations as well as urine concentrations normalized for creatinine were plotted against time using IBM SPSS Statistics 29.0 (IBM, Armonk, New York, USA).

Top 5 most cited protocols using «mathematica 12»

1

Optimization of Growth Curve Parameters

The computation of SSEopt (i.e. the optimization) was done using Mathematica® 12.0 (Wolfram Research). The authors provide a Mathematica file as supporting material. The output was exported to a Microsoft Excel® spreadsheet, also provided as supporting material, as a table in the format (a, b, v0, p, q, SSEopt(a, b)).
For optimization, the grid-point exponent-pairs (a, b) were visited by means of an outer loop running through a = m·0.01 and an inner loop that, for each a, ran through the values b = a + n·0.01. Given (a, b), the optimization of p, q, and v0 was done using a custom-made variant of the method of simulated annealing [43 ], as this made the optimization process fully automated. A typical step of simulated annealing started with given candidates p, q, v0 > 0 for the optimal parameters. The candidate parameters were altered by multiplying them with positive random numbers close to 1. (This preserved positivity, in order to obtain bounded growth curves, and in this respect it differed slightly from the conventional approach of adding small random numbers.) These new parameters were accepted as new candidates if SSE became smaller for them; but in order to escape from suboptimal solutions, new parameter values were also accepted with a certain probability if SSE became larger. (Otherwise, the old candidates were retained.) This step was then repeated with the accepted parameters. At the beginning of each inner loop, i.e. at a grid point near the diagonal of the form (a, a + 0.01), the optimization of p, q, and v0 used 50,000 of these simulated annealing steps, starting with the estimate p = q = 1 and the initial condition v(t1) = v1 (first data point). For the subsequent grid points in the b-direction (inner loop), the previous optimization results were used as starting values and improved in 10,000 annealing steps. After every 1,000 steps, the simulated annealing restarted with the hitherto best parameters, and the probability of accepting parameters with a higher SSE was lowered slightly. (This was made dependent on the optimization results obtained so far and is known as adaptive cooling [43 ].) To assess the output, a plot of the near-optimal exponent-pairs with a 1% threshold for near-optimality (i.e., SSEopt exceeded the minimum SSE by at most 1%) was visually inspected. Where the plot had a frayed appearance, the optimization was repeated with more simulated annealing steps. The computations took about one week of CPU time.
The optimal parameters from simulated annealing were finally used as starting values for a nonlinear regression for the model with the optimal exponent-pair, using standard methods to determine v0, p, q (Mathematica command NonlinearModelFit) and further improve SSEopt.
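A minimal sketch of the annealing variant described above: multiplicative perturbations to preserve positivity, occasional acceptance of worse SSE values, and simple cooling. The toy objective and all tuning constants are my own illustrative choices, not the authors' growth-curve SSE or schedule.

```python
import math
import random

def anneal(sse, start, steps=20000, scale=0.05, t0=1.0, seed=1):
    """Simulated annealing with multiplicative perturbations: candidates
    are multiplied by positive random factors close to 1, which keeps
    all parameters strictly positive."""
    rng = random.Random(seed)
    params = list(start)
    current = sse(params)
    best, best_sse = list(params), current
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9  # simple linear cooling
        cand = [p * math.exp(scale * rng.gauss(0.0, 1.0)) for p in params]
        cand_sse = sse(cand)
        # Accept improvements outright; accept a worse SSE with a
        # temperature-dependent probability to escape local minima.
        if (cand_sse < current
                or rng.random() < math.exp((current - cand_sse) / temp)):
            params, current = cand, cand_sse
            if cand_sse < best_sse:
                best, best_sse = list(cand), cand_sse
    return best, best_sse

# Toy SSE with a known minimum at p = 2, q = 3:
toy = lambda v: (v[0] - 2.0) ** 2 + (v[1] - 3.0) ** 2
best, err = anneal(toy, [1.0, 1.0])
print([round(b, 2) for b in best], round(err, 4))
```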

Corresponding organizations : BOKU University

2

Digitized Brain Contour Modeling for FBM Simulations

The outer border, ventricular spaces, and major white matter tracts of imaged sections were hand-traced in Adobe Illustrator CC by an expert trained in neuroanatomy. To reduce the number of contours, major tracts at the edge of the section were left out of the border contour. The contours were imported into Wolfram Mathematica 12. The outer border contour was split along the median (sagittal) symmetry line. The right side of the contour was smoothed with a moving average and reflected on the left side. The same procedure was used for internal contours symmetric with respect to the median line (e.g., the cerebral aqueduct). Inner contours away from the median line (e.g., the fornix) were smoothed on the right side with a moving average and reflected on the left side. Therefore, the final digitized contours (rational-valued arrays of X- and Y-coordinates) were perfectly bilaterally symmetric, compensating for minor sectioning plane deviations and real (minor) brain asymmetries.
The obtained contours were next reformatted for FBM simulations. They were transformed into N × 2 matrices, the rows of which represented consecutive, integer-valued Y-coordinates and the two columns of which represented the leftmost and rightmost X-coordinates of the contour (also integer-valued). To arrive at this format, the original contour coordinates were divided by an integer factor that after rounding produced at least four X-values for each consecutive Y-value, and the minimal and maximal X-values were chosen. Since this procedure effectively reduced the size of the contour, it was enlarged back to its original size by multiplying the integer coordinates by the same factor and filling in the new, empty rows with X-values obtained by linear interpolation between the nearest available X-coordinates. Because this format cannot encode concavities oriented along the Y-axis (e.g., the third ventricle), such concavities were stored as separate inner contours. In the study, all such concavities were centered on the median line, which allowed their easy capture with the maximal X-value left to the line and the minimal X-value right to the line. For the purpose of this study, all inner contours were treated as impenetrable obstacles, irrespective of their physical nature (ventricular spaces, outer border concavities, white matter tracts).
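The leftmost/rightmost-X reformatting step can be sketched minimally as follows. This is my own simplified version operating on an invented contour; the authors' procedure additionally handles downsampling, linear interpolation of empty rows, and the Y-axis concavities stored as separate contours.

```python
def contour_to_rows(points):
    """Map each integer Y value to (x_min, x_max) of the contour,
    the row format of the N x 2 matrix described above.

    points: iterable of (x, y) coordinate pairs."""
    rows = {}
    for x, y in points:
        yi, xi = round(y), round(x)
        if yi in rows:
            lo, hi = rows[yi]
            rows[yi] = (min(lo, xi), max(hi, xi))
        else:
            rows[yi] = (xi, xi)
    return rows

# A small invented diamond-shaped contour:
pts = [(5, 0), (7, 2), (5, 4), (3, 2), (4, 1), (6, 1), (6, 3), (4, 3)]
print(sorted(contour_to_rows(pts).items()))
```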
The computer simulations described in detail below were performed on the Frontera supercomputing system (NSF, Texas Advanced Computing Center).

Corresponding organizations : University of California, Santa Barbara, University of Potsdam, Missouri University of Science and Technology

3

Spatial Patterns of Myrica cerifera on Hog Island

To investigate the spatial patterns of M. cerifera on Hog Island, detailed maps of evergreen shrub cover were created using georectified aerial photography and hyperspectral imagery. Cloud-free aerial photography was obtained from USGS Earth Explorer for the following dates: 2 Dec 1972 (color infrared), 5 Jul 1986 (color infrared), 5 Jul 1990 (color infrared), and 20 Mar 1994 (RGB). For 2013, 48-band hyperspectral imagery was available for Hog Island, VA42 . A seamless mosaic was performed on the multiple images that made up the 1972 scene; imagery resolution ranged from 0.41 to 1 m². Regions of interest (ROIs) for shrub cover were selected in each year using the bands available in each image (ENVI 5.5.3, L3Harris Geospatial), based on georectified aerial photography, field-surveyed woody thickets of known age using a Trimble Geo-XT GPS unit43 (link), and woody-thicket sampling locations of known age44 (link). Shrub-thicket homogeneity, high leaf cover, and the evergreen leaf habit, relative to the otherwise sparse grassland cover and diversity in the system, create distinct boundaries that are ideal for interpreting shrub cover43 (link). After ROIs were selected, supervised classifications were performed using the maximum likelihood method. Accuracy assessments were performed for each classification (Supplementary Table 1).
The resulting shrub cover was exported to ArcGIS 10.7 (ESRI) and then exported to the program FRAGSTATS 4.2 for spatial pattern analysis45 . We calculated the size (area, m2) of each shrub patch identified in satellite images. The study area, number of shrub patches, and range of patch size from 1972 to 2013 were summarized in Supplementary Table 2. We then fitted a power law using the NonlinearModelFit function in Mathematica 12.0 (Wolfram Research, USA) to the inverse cumulative distribution of shrub patch sizes, defined as P(A ≥ a), the probability of a cluster area A being greater than or equal to a given value a35 (link). This approach has been demonstrated to provide more robust estimates of distribution parameters than other traditional approaches including fitting frequency distribution25 (link),46 (link).
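A hedged sketch of this fitting step: the authors used Mathematica's NonlinearModelFit, whereas here a plain log-log regression on the inverse cumulative distribution stands in for it, with synthetic patch areas drawn deterministically from a known power law.

```python
import math

def inverse_cdf(areas):
    """Empirical P(A >= a) at each sorted area, using the midpoint
    plotting position (n - i - 0.5)/n to avoid P = 0 at the largest patch."""
    s = sorted(areas)
    n = len(s)
    return [(a, (n - i - 0.5) / n) for i, a in enumerate(s)]

def powerlaw_exponent(areas):
    """Estimate beta in P(A >= a) ~ a**(-beta) by log-log least squares."""
    pts = [(math.log(a), math.log(p)) for a, p in inverse_cdf(areas)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope

# Synthetic areas generated by inverting P(A >= a) = a**(-1):
areas = [1.0 / (1.0 - (k + 0.5) / 200.0) for k in range(200)]
print(round(powerlaw_exponent(areas), 2))  # → 1.0
```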

Corresponding organizations : University of California, Berkeley, Virginia Commonwealth University, Texas A&M University

4

Single-Molecule FRET Analysis of RNA Folding

Under our experimental conditions, the confocal volume is most often devoid of fluorescent molecules, with an average occupancy of less than 0.01 molecules. Nevertheless, individual fluorescently labeled molecules will inevitably diffuse through the confocal volume, where they will be excited by the two alternating lasers, resulting in a transient burst of fluorescent photons (Fig. 2B). The detection records of the photons emitted from these freely diffusing fluorescently labeled RNA molecules were analyzed using Mathematica 12.0 (Wolfram Research) in conjunction with Fretica, a C++-based MathLink module for the analysis of time-correlated single-photon-counting single-molecule FRET data (40 ). The data analysis workflow employed during this research is as follows: first, time-gating was used to determine which laser (either 515 or 642 nm) was active for every detected photon in each of the four streams. Then, photons were assigned to 500 μs time bins based on their absolute arrival time (Fig. 2C). Time bins with a total photon count rate (Tot_n, the sum of the photon counts n515 and n642 from each of the four detection streams) of less than 20 photons per bin were used to calculate the average background photon count rate for each of the four streams during either 515 or 642 nm excitation. Then, corrected photon count rates (N) were determined for all time bins by accounting for background, spectral cross talk, direct excitation of the acceptor, and nonidentical excitation and detection efficiencies of the donor and acceptor fluorophores. Those time bins with a corrected total photon count rate (Tot_N, the analogous sum of the corrected count rates from all four streams) of more than 20 photons per bin were considered bursts of fluorescence arising from single molecules diffusing through the confocal volume (41 (link)).
Corrected photon count rates associated with the acceptor and donor fluorophores (Acc_N, summed from streams 1 and 3, and Don_N, summed from streams 2 and 4, where Tot_N = Acc_N + Don_N) during 515 and 642 nm excitation were used to calculate values for both the FRET efficiency (E) and fluorescence stoichiometry (S) using Eq. 1 and Eq. 2, respectively (Fig. 2D):

E = Acc_N(515) / (Acc_N(515) + Don_N(515))   (Eq. 1)

S = Tot_N(515) / (Tot_N(515) + Tot_N(642))   (Eq. 2)
The fluorescence stoichiometry (S) of a burst arising from a molecule containing only donor or acceptor fluorophores will yield values near S = 1 and S = 0, respectively. Therefore, values of fluorescence stoichiometry are used to restrict our analysis and interpretation of FRET efficiencies to only those bursts arising from molecules containing active donor and acceptor fluorophores (i.e., 0.25 < S < 0.75). This allows us to effectively filter out unwanted contributions from any potential donor-only or acceptor-only molecules that, for example, may not have been removed during the HPLC purification of the RNA constructs.
The FRET efficiency (E) values from bursts with TotN > 50 were then compiled into histograms (Fig. 2D), in which the widths of the resulting distributions were largely limited by shot noise (Fig. S1). Histograms were then fitted using Gaussian distributions to determine the mean FRET efficiency, ⟨E⟩, and fractional abundance, Θ, of the folded and unfolded subpopulations. Based on several repeated measurements under identical conditions, typical experimental uncertainties associated with ⟨E⟩ and Θ are ±0.02 and ±0.03, respectively. The fractional abundance of each subpopulation was used to calculate the equilibrium constant for folding (Kfold = Θf/Θu) and thus the standard state Gibbs free energy difference (ΔG°fold = −RT ln Kfold, where R is the gas constant) between the two subpopulations at T = 294.2 K. For the ease of data interpretation, all ΔG°fold values in apolar solvent conditions were referenced to aqueous conditions, i.e., ΔΔG°fold = ΔG°fold (mixture) − ΔG°fold (H2O).
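The thermodynamic conversion at the end of this paragraph is a one-liner; the sketch below (with invented fractional abundances, not measured ones) computes ΔG°fold = −RT ln(Θf/Θu) at T = 294.2 K.

```python
import math

R = 8.314462618  # gas constant, J / (mol K)

def dg_fold(theta_folded, theta_unfolded, temperature=294.2):
    """dG_fold = -R * T * ln(Kfold), with Kfold = theta_f / theta_u."""
    k_fold = theta_folded / theta_unfolded
    return -R * temperature * math.log(k_fold)

# An illustrative 70/30 folded:unfolded split favors folding (negative dG):
print(round(dg_fold(0.7, 0.3) / 1000.0, 2), "kJ/mol")  # → -2.07 kJ/mol
```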

Corresponding organizations : University of Kansas

5

SAXS Analysis of CNC Suspensions

Scattering patterns of CNC suspensions were collected using a SAXSLAB GANESHA 300-XL instrument (Xenocs, Grenoble, France). Cu Kα radiation was generated by a GeniX 3D Cu source with an integrated monochromator, 3-pinhole collimation, and a two-dimensional Pilatus 300 K detector. The scattering intensity I(q) was recorded in the q interval of 0.007 < q < 0.25 Å−1 (corresponding to length scales of 25–900 Å), where the scattering vector is defined as q = (4π/λ) sin θ, with 2θ and λ being the scattering angle and wavelength, respectively. The measurements were performed under vacuum at ambient temperature. The suspensions were sealed in thin-walled quartz capillaries about 1.5 mm in diameter with 0.01 mm wall thickness. The scattering curves were corrected for counting time and sample absorption. The 2D SAXS patterns were azimuthally averaged to produce one-dimensional intensity profiles, I vs. q, using the two-dimensional data reduction program SAXSGUI. The scattering spectra of the solvent were subtracted from the corresponding solution data using Igor Pro 9 (WaveMetrics, Portland, Oregon, USA) for analysis of the small-angle scattering data [36 (link)]. Data analysis was based on fitting the scattering curve to an appropriate model provided by Mao et al. [37 (link)] using Wolfram Mathematica 12.3 software (Champaign, IL, USA).
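As a quick consistency check on the quoted q-range, the real-space length scale probed is d = 2π/q, which indeed recovers the stated 25–900 Å window:

```python
import math

def length_scale(q):
    """Real-space length d = 2*pi/q probed at scattering vector q (1/angstrom)."""
    return 2.0 * math.pi / q

# q from 0.25 down to 0.007 inverse angstroms:
print(round(length_scale(0.25)), "to", round(length_scale(0.007)), "angstroms")
```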

Corresponding organizations : Ben-Gurion University of the Negev, Hebrew University of Jerusalem


About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, thereby offering them extensive information to design robust protocols aimed at minimizing the risk of failures.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust in allowing scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.
