
Mathematica 8

Manufactured by Wolfram
106 citations
Sourced in United States, United Kingdom
About the product

Mathematica 8 is a comprehensive software system that combines a powerful programming language with a wide range of mathematical and computational capabilities. It is a versatile tool for numerical and symbolic computation, data visualization, and algorithm development, offering a unified interface for working with numerical, symbolic, and graphical representations of data. Its core purpose is to provide a flexible platform for scientific and technical computing.


Market Availability & Pricing

The Mathematica 8 software is no longer actively commercialized by Wolfram. The latest version, Mathematica 14.2.1, was released in April 2025 and is the recommended replacement. While second-hand copies of Mathematica 8 may be available on various marketplaces, potential buyers should be aware that older software versions may lack support and compatibility with current systems. For the most up-to-date features and support, it is advisable to consider the latest version of Mathematica.



106 protocols using «mathematica 8»

1

Computational Modeling of Proto-Cell Systems

2023
The modeling work was carried out on a PC, and the models were coded in the Mathematica 8.0 software environment (Wolfram Research). Wolfram Workbench 2.0 (Wolfram Research) was used for model building and editing. A home-built software tool written in Java was used to carry out semiautomated comparisons of different models in Mathematica m-file format by creating an xls-format report of differences between models. These reports outlined the differences in input variables and equations while ignoring irrelevant aspects such as the order of the variables.
The Mathematica Solve algorithm was used to carry out calculations of models that consisted of determined (equal number of equations and output parameters) systems of non-linear equations. Calculation results were parsed from Mathematica output files, and data arrays were generated in MS Excel using a home-built converter written in Java. The converter also carried out some basic calculations on the data for correlation analysis.
Conventional software tools were used to create schemes of proto-cell models (Fig. 3 and Supplementary Figs. 19). Combined Figs. 1 and 2 were created with Mathematica 10 using several functions for the visualization of 2D graphs (Wolfram Research).
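The Solve step above amounts to root-finding on a determined system of nonlinear equations (as many equations as unknowns). As a minimal sketch of the same computation outside Mathematica, here is a hypothetical two-equation system solved with SciPy; the actual proto-cell model equations live in the m-files and are not reproduced here:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical determined system (2 equations, 2 unknowns) standing in for
# a proto-cell model; the real equations are in the Mathematica m-files.
def equations(v):
    x, y = v
    return [x**2 + y - 4.0,   # f1(x, y) = 0
            x - y + 1.0]      # f2(x, y) = 0

x, y = fsolve(equations, x0=[1.0, 1.0])
```

As with Mathematica's Solve on a determined system, fsolve requires the number of equations to match the number of unknowns; the returned vector can then be parsed into data arrays for downstream correlation analysis.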

Corresponding organizations : Tallinn University of Technology

2

Quantifying Neuritogenic and Neuroprotective Effects of NPs

2022
For estimation of the H3/H6-AuNP/NS potencies in the neurite outgrowth assay (Figure 6), experimental data (neurite length vs NP/NS concentration) were fitted to the first-order Langmuir equation using Mathematica 8.0 software (Wolfram Research, USA), where efficacy (the maximal neuritogenic effect of a given nanocompound) and potency (the median effective compound concentration) are the estimated parameters, basal_length is the basal length of neurites in the absence of stimulation (corresponding to [NP/NS] = 0, Figure 6), and [NP] is the nanocompound concentration ([NP/NS], Figure 6).
To estimate the H3-AuNP/NS potency in the H2O2-induced neurotoxicity assay (Figure 7B), a similar first-order Langmuir equation was used, where efficacy (the maximal prosurvival effect of a given nanocompound) and potency are the estimated parameters, basal_survival is the basal survival of the H2O2-treated neurons in the absence of nanocompounds (corresponding to [NP/NS] = 0, Figure 7B), and the rest of the parameters are defined as in eq 2.
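The displayed equation itself did not survive extraction. Assuming the standard hyperbolic first-order Langmuir dose-response form with the parameter names given in the text, response = basal + efficacy·[NP]/(potency + [NP]), the fit can be sketched in Python; the concentration and length values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed hyperbolic first-order Langmuir form (the protocol's displayed
# equation is not reproduced in the source text): the response rises from
# basal_length toward basal_length + efficacy, half-maximal at [NP] = potency.
def langmuir(conc, efficacy, potency, basal_length):
    return basal_length + efficacy * conc / (potency + conc)

# Synthetic neurite-length data (arbitrary units), for illustration only.
conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
rng = np.random.default_rng(0)
clean = langmuir(conc, efficacy=40.0, potency=2.0, basal_length=15.0)
lengths = clean + rng.normal(0.0, 0.5, size=conc.size)

popt, _ = curve_fit(langmuir, conc, lengths, p0=[30.0, 1.0, 10.0])
efficacy, potency, basal = popt
```

The fitted potency plays the role of the median effective concentration, and the fitted efficacy is the maximal effect above basal.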

Corresponding organizations : London Centre for Nanotechnology, University of Copenhagen, King's College London, University College London, The Francis Crick Institute, Hammersmith Hospital, Imperial College London

3

Transcriptional Bursting Analysis using MLE

2021
Maximum‐likelihood estimation (MLE) was used to select burst frequency (ka) and burst size (b = kt/ki) parameters that best fit the measured mRNA distributions to the full analytical solution to the two‐state stochastic gene expression model (Peccoud & Ycart, 1995 ). Although this is a steady‐state solution, we use it here to approximate how TNF affects transcriptional bursting (Wong et al, 2018 (link)). We assumed that the two alleles for each gene were independent and that bursting was sufficiently infrequent such that bursting events were unlikely to overlap, allowing a reasonable estimate of burst size and an upper bound on the estimate of burst frequency by modeling transcription from a single allele. MLE was performed as numerical minimization over the negative log‐likelihood function defined over the probability density function (pdf) given the observed experimentally determined RNA distributions for each condition using the method of moments. As previously reported, mRNA distributions are not sufficient to independently determine the promoter inactivation rate ki and the transcription rate kt. Using a previously described method (Raj et al, 2006 (link); Dey et al, 2015 (link)), we held the transcription rate kt constant across all conditions and reported b. Sensitivity analysis of the kt value for each gene suggested that our results are largely independent of the kt value chosen for each gene (Appendix Fig S2B). MLE was implemented using custom code in Mathematica 8 (Wolfram Inc.) as previously described (Dey et al, 2015 (link)). The model was fit to smFISH distributions from combined replicates except for the Nfkbia TNF 1‐h time point. The model was unable to produce a fit for the combined dataset and thus replicates were fit individually. An example fit is included in Fig EV3, but the burst size and burst frequency were not reported due to this discrepancy.
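A minimal sketch of this fitting approach, assuming the Peccoud & Ycart steady-state solution for the two-state model and the b = kt/ki parameterization with kt held fixed, written in Python rather than the authors' Mathematica code; all rates are expressed in units of the mRNA degradation rate, and the fixed kt value here is an arbitrary placeholder:

```python
import numpy as np
from scipy.special import gammaln, hyp1f1

def pmf(n, ka, ki, kt):
    """Steady-state mRNA pmf of the two-state model (Peccoud & Ycart, 1995);
    ka = burst frequency, ki = promoter inactivation rate, kt = transcription
    rate, all in units of the mRNA degradation rate."""
    n = np.asarray(n, dtype=float)
    log_coef = (n * np.log(kt) - gammaln(n + 1.0)
                + gammaln(ka + n) - gammaln(ka)
                + gammaln(ka + ki) - gammaln(ka + ki + n))
    return np.exp(log_coef) * hyp1f1(ka + n, ka + ki + n, -kt)

def neg_log_likelihood(params, counts):
    ka, b = params              # burst frequency and burst size
    kt = 50.0                   # transcription rate held fixed, as in the text
    ki = kt / b                 # burst size b = kt / ki
    p = pmf(counts, ka, ki, kt)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

# Sanity check: the pmf should sum to one over the support.
support = np.arange(0, 400)
total = pmf(support, ka=0.5, ki=10.0, kt=50.0).sum()

# Example evaluation of the objective that MLE would minimize
# over the observed smFISH counts (invented here).
counts = np.array([0, 1, 2, 5, 8])
nll = neg_log_likelihood([0.5, 5.0], counts)
```

MLE then amounts to numerical minimization of this negative log-likelihood over (ka, b) for each condition, as the protocol describes.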

Corresponding organizations : Yale University, Dana-Farber Cancer Institute, Harvard University

4

Multimodal Experiments on Hand Preshaping

2021
Designs for motor execution and imagery experiments were based on a previous work [56 (link)] and relied on a delayed grasping task after a visual presentation of the target objects. More specifically, in each trial, a picture of the target object was visually presented for 2 seconds, then, after a 4-second pause, an auditory cue prompted the actual task: participants had to preshape the hand as if they were grasping the target object to use it (for the execution group) or imagine a preshaping movement without moving their hand (for the imagery group). A 10-second interval separated 2 subsequent trials. Twenty different target objects were used for this study (see Table 6 for a list), and, in each experiment, movements were repeated 5 times, for a total number of 100 trials, organized in 5 fMRI runs, each lasting 5 minutes 44 seconds, including 12 seconds of rest at the beginning and at the end of each run to achieve a measure of baseline fMRI activity. The experimental paradigm for execution and imagery experiments was coded using Presentation (Neurobehavioral System, Berkeley, CA), and presented with an MR-compatible monitor at the resolution of 1,200 × 800 pixels, and a mirror mounted on the MR coil. During the observation experiment, participants watched short videos of preshaping movements towards an object from the same set adopted in the other experiments. In each trial, the video was followed by a task that implied a judgment on the target of the preshaping gesture. To create videos, we used vectors of joint angles (according to a 24 DoFs model) corresponding to the common starting posture and to the 20 final object-specific postures, recorded in a previous study [56 (link)]. Intermediate hand configurations (i.e., posture vectors) between the initial and final postures were obtained from linear interpolation between the values of each kinematic joint angle in the initial and final hand postural configurations. 
The resulting 30 vectors of joint angles were plotted as 3D renderings, using Mathematica 8.0 (Wolfram Research Inc, Champaign, IL, USA), saved as png images (size: 800 × 600 pixels), and converted to 1 second-long videos at a frame rate of 60 Hz. Five sets of 20 videos were created, showing the hand rendering as seen from 5 different viewpoints, obtained by changing the values of azimuth and elevation. During the fMRI experiment, participants performed 5 runs, each comprising 20 trials. During each trial, the video was presented (1 second), followed by a black fixation cross at the center of the screen (7 seconds). Then, the judgment task (2-alternatives forced choice) was presented, and participants were shown the black/white pictures of 2 objects (size: 250 × 250 pixels)—the target of the preshaping gesture previously shown and a randomly chosen alternative—and asked to press the left or right key on an MR-compatible keyboard to select the actual target of the preshaping movement. After the task, the same black fixation cross was shown for 6 seconds. Each run comprised the presentation of the full set of 20 videos (20 objects), always from the same viewpoint; the 5 different viewpoints were presented in separate runs. Each run started and ended with 10 seconds of rest and lasted in total 5 minutes 40 seconds. The experimental paradigm was delivered with an MR-compatible monitor at the resolution of 1,200 × 800 pixels, and a mirror mounted on the MR coil, using the e-Prime 2 software package (Psychology Software Tools, Pittsburgh, PA, USA). Owing to hardware failure, behavioral responses from 2 participants could not be recorded. For all experiments, participants performed a familiarization run, outside the MR scanner, to ensure that they correctly understood the procedures.
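The posture interpolation step can be sketched as follows; the 24-DoF dimension and the count of 30 intermediate postures come from the text, while the angle values themselves are invented:

```python
import numpy as np

def interpolate_postures(start, end, n_frames):
    """Linearly interpolate each joint angle between an initial and a
    final hand posture, returning one posture vector per frame."""
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    # np.linspace broadcasts over the joint dimension,
    # giving shape (n_frames, n_joints).
    return np.linspace(start, end, n_frames)

# 24-DoF postures (angles in degrees, illustrative values only).
rng = np.random.default_rng(1)
start_posture = rng.uniform(-30.0, 30.0, size=24)
final_posture = rng.uniform(-30.0, 30.0, size=24)

frames = interpolate_postures(start_posture, final_posture, n_frames=30)
```

Each row of `frames` is one intermediate hand configuration; in the protocol these were rendered as 3D images in Mathematica 8.0 and assembled into videos.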

Corresponding organizations : Piaggio (Italy), University of Pisa, Italian Institute of Technology, Technical University of Munich, IMT School for Advanced Studies Lucca, University of Zurich, Medizinische Hochschule Hannover

5

Quantitative MRI Analysis of Atherosclerotic Plaque

2021
Image analysis was performed using a custom-built algorithm in Mathematica 8.0 (Wolfram Research, 2010, Champaign, IL, USA). T2 maps were generated by pixel-wise fitting of the signal intensity as function of TEeff with a mono-exponential decay function. An ROI was manually drawn around the plaque on the image with shortest TEeff for each of the three time points. To validate this plaque ROI drawn on the BCA view, ROIs encompassing the plaque were also drawn on the axial black-blood images. From the relative 3D orientation of these slices, the intersections of axial ROIs with the BCA view were determined, and agreement with these perpendicular ROIs was inspected visually. A filter was applied to select all pixels in the plaque ROI with an R2 of fit larger than 0.7. Subsequently, the mean plaque T2 was determined by averaging of R2 = 1/T2 over all remaining pixels. Additionally, ΔR2, the difference between pre- and post-injection R2, was calculated because this parameter is directly proportional to the difference in iron oxide concentration. The plaque area on MRI was determined by multiplying the total pixel count in the plaque ROI with the in-plane pixel area (0.05 × 0.05 = 0.0025 mm2).
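A sketch of the pixel-wise mono-exponential T2 fit with the R² > 0.7 acceptance filter, in Python rather than the original Mathematica code; the echo times and signal values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(te, s0, t2):
    return s0 * np.exp(-te / t2)

def fit_pixel_t2(te, signal, r2_threshold=0.7):
    """Fit S(TEeff) = S0*exp(-TEeff/T2) for a single pixel; return T2, or
    None when the coefficient of determination of the fit (the protocol's
    'R2 of fit') falls below the threshold."""
    popt, _ = curve_fit(monoexp, te, signal, p0=[signal[0], te.mean()])
    residuals = signal - monoexp(te, *popt)
    r_squared = 1.0 - np.sum(residuals**2) / np.sum((signal - signal.mean())**2)
    return popt[1] if r_squared >= r2_threshold else None

# Synthetic pixel with T2 = 40 ms, sampled at several effective echo times.
te = np.array([10.0, 20.0, 30.0, 40.0, 60.0, 80.0])   # ms
signal = 1000.0 * np.exp(-te / 40.0)
t2 = fit_pixel_t2(te, signal)
r2_rate = 1.0 / t2    # relaxation rate R2 = 1/T2 (ms^-1)
```

Note the two distinct uses of "R2" in the protocol: the goodness-of-fit statistic used for pixel selection, and the relaxation rate R2 = 1/T2 averaged over the accepted pixels.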

Corresponding organizations : Maastricht University, Amsterdam University Medical Centers, University of Amsterdam

Top 5 most cited protocols using «mathematica 8»

1

Constraint-based Flux Balance Analysis

This cFBA framework was coded in the Python™ programming language and requires the CBMPy package (http://cbmpy.sourceforge.net/) and the IBM ILOG CPLEX Optimizer. CBMPy was developed based on the principles of the Python Simulator of Cellular Systems (PySCeS) [37]. The supplementary materials provide the scripts for the phase plane analyses shown in Figures 2B and 2C (coded in Wolfram Mathematica® 8), the Python scripts and data for the analyses depicted in Figures 2A, 3B and 3C, and all the stoichiometric models (SBML format) used in this paper.
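At its core, flux balance analysis of this kind is a linear program: maximize an objective flux subject to the steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction sketch with SciPy follows; the actual models are SBML files solved with CBMPy/CPLEX, and the network below is invented:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network, illustration only: uptake -> A, A -> B, B -> biomass.
# Rows = metabolites (A, B); columns = reactions (v_uptake, v_conv, v_bio).
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

# Maximize biomass flux v_bio, i.e. minimize -v_bio, subject to the
# steady-state constraint S v = 0 and flux bounds.
c = np.array([0.0, 0.0, -1.0])
bounds = [(0.0, 10.0),    # uptake capped at 10
          (0.0, None),
          (0.0, None)]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
v_optimal = res.x
```

Phase plane analyses such as those in Figures 2B and 2C repeat this optimization over a grid of constraint values and plot the resulting optimal fluxes.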
2

Bloch-McConnell Simulations for R1ρ NMR

All of the simulations carried out in this study assume two-site exchange between a GS and ES. Synthetic carbon R1ρ relaxation data at 14.1 T magnetic field strength, consisting of 150 data points with varying resonance offset (ΔΩ) and power (ω1) (listed in Table S1) reflecting values typically used in our nucleic acid studies[33 (link), 37 ], were computed by solving the Bloch-McConnell equations (Equation 3) numerically using Mathematica 8.0 (Wolfram Research). To be consistent with the R1ρ experimental data analyzed in this study, we generated the synthetic Bloch-McConnell R1ρ data assuming the 13C selective 1D R1ρ experiment by Hansen et al.[24 (link), 31 (link)] In this experiment an equilibration period exists prior to alignment with the effective field (ωG or ωeff for exchange on the slow and intermediate-fast timescale, respectively) for the spin lock period (Figure 1). The equilibration period allows for the buildup of excited-state magnetization that ends up aligned via a hard pulse with the effective field (ωG or ωeff, Figure 1). Simulations of the spin lock period with the Bloch-McConnell equations showed that this introduces negligible deviations relative to the conventional scheme for the asymmetric populations and intermediate/fast exchange rates examined here and in our previous dispersion studies of nucleic acids.[24 (link), 33 (link), 37 ] However, for relatively large pE > 15% and slow exchange (kex/Δω < 0.5) the above scheme can result in oscillations due to precession of the initial ES magnetization; these can be mitigated by a simple modification of the pulse sequence as described by Kay[31 (link)] and more recently by Zhang[15 (link)]. R1ρ was computed by fitting the resultant projection of the net magnetization of the ground and excited states following the relaxation period along ωeff = (ΔΩ² + ω₁²)^(1/2) as a function of time to a monoexponential decay. For fast exchange (kex/Δω ≥ 2), R1ρ was calculated by initially aligning both the GS and ES magnetization along the average effective field (ωeff) and by projecting the resultant GS and ES magnetization following the spin lock period about the average effective field (ωeff) (Figure 1). For slow exchange (kex/Δω ≤ 1), R1ρ was calculated by initially aligning both the GS and ES magnetization along the GS effective field (ωG) and by projecting the resultant GS and ES magnetization following the spin lock period about the average effective field (ωeff) (Figure 1). We note that for systems in slow exchange, and when using the initial magnetization preparation scheme that aligns both the ES and GS along the GS effective field, the actual observed projection would be about the ground-state effective field (ωG).[24 (link)] Simulations using the Bloch-McConnell equations show that this can lead to small deviations relative to projections about the average effective field for a small subset of conditions examined here involving slow exchange, low spin lock fields of ω1 < 1000 Hz and pE > 15%. However, these deviations have negligible effects on the extracted exchange parameters. Similar exchange parameters are obtained when fitting 5% noise-corrupted R1ρ data computed assuming projections along either the average (ωeff) or ground-state (ωG) effective fields (data not shown). We also note that we carried out simulations for scenarios fast on the chemical shift timescale (kex/Δω ≥ 2) in which we calculated R1ρ by initially aligning both the GS and ES magnetization along the GS effective field (ωG) and obtained nearly identical uncertainties to those reported in Table 1 when fitting 5% noise-corrupted data.
The synthetic R1ρ data was uniformly noise corrupted assuming 5% uncertainty by randomly selecting a value from a normal distribution centered at the R1ρ value with a standard deviation set equal to 5%. Standard simulations assumed Δω = 2.12 ppm (2,000 s−1), pE of varying asymmetry (5%, 15%, 30%), exchange rates (kex) ranging between 1,000 s−1 and 30,000 s−1 spanning kex/Δω values between 0.5 and 15. The R2 and R1 relaxation rate constants were equal to 11.0 s−1 and 1.5 s−1, respectively to be consistent with prior theoretical studies examining R1ρ and its associated algebraic expressions.[12 (link), 27 , 29 (link), 43 (link)] As shown below, increasing the value of R2 but assuming a constant signal to noise ratio results in larger uncertainty in the extracted exchange parameters (pE, Δω, kex) due to the increase in the relative contribution of R2 to transverse relaxation as compared to Rex.
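A compact sketch of this procedure, assuming a simplified two-site Bloch-McConnell propagation with the magnetization initially aligned along the ground-state effective field and R1ρ extracted by a monoexponential fit of the projection; the equilibration period, alignment pulses, and sign conventions of the actual experiment are simplified here:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import curve_fit

def bm_matrix(dG, dE, w1, R1, R2, pE, kex):
    """Two-site Bloch-McConnell evolution matrix (rates in s^-1, angular
    frequencies in rad/s) for M = (MxG, MyG, MzG, MxE, MyE, MzE), with the
    spin lock w1 along x and resonance offsets dG, dE."""
    kGE = pE * kex            # G -> E exchange rate
    kEG = (1.0 - pE) * kex    # E -> G exchange rate
    L = np.zeros((6, 6))
    for i, (d, k_out) in enumerate([(dG, kGE), (dE, kEG)]):
        o = 3 * i
        L[o:o+3, o:o+3] = [[-R2 - k_out, -d, 0.0],
                           [d, -R2 - k_out, -w1],
                           [0.0, w1, -R1 - k_out]]
    L[0:3, 3:6] += np.eye(3) * kEG    # magnetization arriving from E
    L[3:6, 0:3] += np.eye(3) * kGE    # magnetization arriving from G
    return L

def simulate_r1rho(dG, w1, R1, R2, pE=0.0, kex=0.0, dE=0.0):
    """Propagate the BM equations and fit the projection of the net
    magnetization along the GS effective field to a monoexponential."""
    L = bm_matrix(dG, dE, w1, R1, R2, pE, kex)
    theta = np.arctan2(w1, dG)                    # tilt of the effective field
    n_eff = np.array([np.sin(theta), 0.0, np.cos(theta)])
    M0 = np.concatenate([(1.0 - pE) * n_eff, pE * n_eff])
    times = np.linspace(0.0, 0.2, 40)             # spin-lock durations (s)
    proj = [np.concatenate([n_eff, n_eff]) @ (expm(L * t) @ M0) for t in times]
    popt, _ = curve_fit(lambda t, a, r: a * np.exp(-r * t), times, proj,
                        p0=[1.0, R2])
    return popt[1]

# No exchange, on resonance: R1rho should reduce to R2.
r_no_exchange = simulate_r1rho(dG=0.0, w1=2*np.pi*1000.0, R1=1.5, R2=11.0)
# A 5% excited state shifted by 300 Hz adds an exchange contribution (Rex).
r_with_exchange = simulate_r1rho(dG=0.0, w1=2*np.pi*1000.0, R1=1.5, R2=11.0,
                                 pE=0.05, kex=2000.0, dE=2*np.pi*300.0)
```

The on-resonance, no-exchange limit (R1ρ = R2) provides a convenient internal check on the propagation; the protocol's R1 = 1.5 s⁻¹ and R2 = 11.0 s⁻¹ are reused here.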

Corresponding organizations : University of Michigan–Ann Arbor, Duke University

3

Genome-Wide Analysis of Transcriptional Changes

The transcription levels were determined as the log-scale mRNA concentrations (pM). The transcriptional changes (fluctuations) were calculated as the difference between the two transcription levels. Although several methods for the determination of chromosomal periodicity have been reported [4 (link),32 (link)], a simple approach was used here. Each expression value that had been determined with the method described in the previous section was projected onto the genome site that corresponded to the gene position, using 100-base bins. Next, the series of expression levels along the genome was smoothed with a moving average of 500 bins (50 kb). The periodicity was calculated using a standard Fourier transform, and the significance of the periodicity was assessed with the chi-squared test. The approximate line of the periodicity was calculated using the highest (most statistically significant) peak of the periodogram and was fitted by minimizing the square error between the approximate line and the series of expression values. Gene set enrichment analysis (GSEA) was performed according to the original report [36 (link)] using the available online tools (http://www.broadinstitute.org/gsea/index.jsp). The TF and sigma factor gene regulation datasets were from RegulonDB. To compare the responses of MG1655 and MDS42, we limited the gene sets to those genes that were included among the 3710 common genes. The pre-ranked gene lists used as the input for GSEA comprised genes filtered by the absolute value of their expression difference between MG1655_hs and MG1655 (or MDS42 and MG1655). The regulatory networks were visualized using Gephi software (http://gephi.org). Subsequently, the Bioconductor software package RankProd [34 (link)], which is based on the rank product method, was employed to identify the differential gene expression caused by genome reduction and the heat shock response.
The rank product method is a nonparametric statistical method derived from biological reasoning that detects items that are consistently ranked high or low in a number of lists. An advantage of this method is its ability to identify biologically relevant expression changes among a relatively small number of samples [35 (link)]. Finally, the statistical analyses, with the exception of GSEA, were performed using R software [48 ] (http://www.r-project.org), and the array plot (heat map) of the gene categories was constructed using Mathematica 8 (Wolfram Research).
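The smoothing-plus-periodogram step can be sketched as follows; the 100-base bin size and 500-bin window come from the text, while the synthetic series and its period are invented, and the chi-squared significance assessment is omitted:

```python
import numpy as np

def dominant_period(values, bin_size_kb=0.1, window_bins=500):
    """Smooth a genome-ordered expression series with a moving average,
    then return the period (in kb) of the highest periodogram peak."""
    kernel = np.ones(window_bins) / window_bins
    smoothed = np.convolve(values - np.mean(values), kernel, mode='valid')
    power = np.abs(np.fft.rfft(smoothed))**2
    freqs = np.fft.rfftfreq(smoothed.size, d=bin_size_kb)
    peak = np.argmax(power[1:]) + 1    # skip the zero-frequency term
    return 1.0 / freqs[peak]

# Synthetic series: 46,000 bins of 100 bases carrying a ~600-kb oscillation.
rng = np.random.default_rng(2)
bins = np.arange(46000)
series = (np.sin(2 * np.pi * bins * 0.1 / 600.0)
          + rng.normal(0.0, 0.3, bins.size))
period_kb = dominant_period(series)
```

The recovered period is quantized to the frequency grid of the transform, so it lands near, not exactly on, the true value for series lengths that are not an integer number of cycles.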

Corresponding organizations : Osaka University

4

Electrophysiological Techniques for Neuronal Analyses

Analysis was performed using Igor Pro (Wavemetrics), C-Stimfit (C. Schmidt-Hieber, University College London, and P.J.; http://code.google.com/p/stimfit/), Excel (Microsoft) and Mathematica 8.01 (Wolfram Research). Action potential peak amplitude was measured from threshold (50 V s−1 for soma or axon and 10–50 V s−1 for dendrite). Action potential duration was determined at half-maximal amplitude, using threshold and peak as reference points. Latency differences between somatic and dendritic action potentials were quantified using the time points at half-maximal amplitude in the action potential rising phase. To determine the action potential initiation site, we fit action potential latencies with a third-order polynomial function. Average spike frequency was determined from the number of action potentials during a 1-s depolarizing current pulse. Amplitude of dendritic spikes was measured after subtraction of appropriately scaled subthreshold responses. Activation and inactivation curves were fitted by Boltzmann functions of the form f(V) = A[1 + exp((V − V0.5)/k)]^−1 + B, where V0.5 is the midpoint potential, k is the slope factor, and A and B are amplitude factors. For display of activation and inactivation data, values were normalized to the maximal value of the fitted curve. Statistical significance of differences between midpoint potentials at soma and dendrites was tested by bootstrap analysis.
Values indicate mean ± s.e.m. Membrane potentials are given without correction for liquid junction potentials. Significance of differences was assessed by two-sided nonparametric Wilcoxon signed rank or Mann-Whitney tests at a significance level of P < 0.05. Significance of correlations was tested using the Spearman rank correlation test (Igor Pro). For the calculation of current and conductance density, membrane patch area was estimated from pipette resistance using a previously established empiric relation52 . Distances were measured from the center of the soma to the tip of the dendritic or axonal recording pipette. Video images were acquired with DScaler (the DScaler project team) and analyzed using ImageJ (W. Rasband, US National Institutes of Health). The width of the different layers was measured using biocytin-labeled CA3 pyramidal neurons.
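A sketch of the Boltzmann fit for activation/inactivation curves using the functional form given above; the data points, midpoint, and slope factor below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(V, A, B, V05, k):
    # f(V) = A[1 + exp((V - V0.5)/k)]^-1 + B
    return A / (1.0 + np.exp((V - V05) / k)) + B

# Synthetic steady-state inactivation data: midpoint -60 mV, slope 6 mV.
V = np.arange(-100.0, 0.0, 5.0)
rng = np.random.default_rng(3)
g = boltzmann(V, A=1.0, B=0.0, V05=-60.0, k=6.0) + rng.normal(0.0, 0.01, V.size)

popt, _ = curve_fit(boltzmann, V, g, p0=[1.0, 0.0, -50.0, 5.0])
A_fit, B_fit, V05_fit, k_fit = popt
```

For display, the fitted curve would be normalized to its maximal value, as the protocol specifies.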

Corresponding organizations : Institute of Science and Technology Austria

5

Stochastic Gene Expression Modeling

RNA distributions were fit by maximum-likelihood estimation (MLE) of model parameters using the full analytical solution to the two-state stochastic gene expression model (Peccoud & Ycart, 1995 ). MLE was implemented using custom code in Mathematica 8 (Wolfram Inc.) as numerical minimization over the negative log-likelihood function defined over the pdf given the observed RNA counts, with the rate of RNA degradation set to our experimentally determined rate and the transcription rate assumed to be constant across integration sites, as previously discussed (Skupsky et al, 2010 (link)). In this manner, the effective fit parameters are the burst frequency and burst size.

Corresponding organizations : University of California, Berkeley, Lawrence Berkeley National Laboratory, Joint BioEnergy Institute


