Our technical experts have compiled a wide variety of information on our products, systems, and applications. Whenever you need it, this database serves as a technical resource developed specifically to answer your questions and assist you with troubleshooting.
Select a topic below to browse the Knowledge Base.
The Prep Station run takes between 1.5 and 2.5 hours, depending on the number of samples being processed.
If the power outage lasts long enough that the instrument loses power, the reagents and samples will need to be discarded. It is highly recommended that both instruments (Prep Station and Digital Analyzer) be connected to an Uninterruptible Power Supply (UPS).
Yes, a stylus can be used with the touch screen on both the Prep Station and the Digital Analyzer.
The deck layout validation verifies that all of the consumables and reagents have been placed properly on the deck.
The waste container should not be lined with plastic, as a liner could cause a crash due to tips building up higher than expected. The waste container can be cleaned with a 70% Ethanol solution.
Based on observations during normal transit, Prep Plates stored in an incorrect orientation for a few days will not be significantly affected. However, the longer the plates remain in an incorrect orientation, the greater the risk of test failure. It is highly recommended that the Prep Plates be returned to their proper orientation as soon as possible. Always check for leakage before using.
Yes, the Prep Station will ask to confirm the action before aborting a run.
The cartridge can be stored for up to 1 week, protected from light and at 4°C.
Scanning takes about 25 minutes per well, or roughly 1 to 4.5 hours per cartridge depending on the number of wells scanned.
You can pause the Digital Analyzer at any time, and the scanning of other cartridges will not be affected. If pausing is attempted and does not work please allow the Digital Analyzer to complete scanning, then download the Log Files and send them to firstname.lastname@example.org for assistance in troubleshooting.
It is recommended that, upon completion of scanning, the cartridge be removed, wrapped in foil or placed in an opaque box, and stored at 4°C. Cartridges can be stored for up to one week and then discarded in accordance with your laboratory regulations.
Power cycling of the instruments is recommended at minimum every 14 days. It is important to ensure that samples are not processing on either instrument when they are turned off. Instructions on power cycling the instruments can be found in the nCounter System User Manual.
Routine maintenance is required for optimal system performance. Please consult the user manual or watch our training video for more information. Some planned maintenance is included with a service contract and performed by your Field Service Engineer.
NanoString recommends having a field service engineer on-site to assist you with moving your nCounter Analysis System, even over short distances. In some cases, this assistance may be covered under an active service contract agreement. Please contact email@example.com for more information.
The nCounter Pro Analysis System has been validated to support a 21 CFR Part 11 environment; see our summary document for additional information. The nCounter Sprint Profiler has not been validated to this standard.
The cartridge has been specified to perform with volumes from 25–35 µL. Please note that the user can add water to the hybridized sample to bring it into the appropriate range.
It is important that you depress the pipette plunger to the second stop and create an air gap behind the sample when loading your SPRINT cartridge. This facilitates consistent sample processing.
These air gaps do not need to be the same size, as the instrument will correct most variation. Always pull the pipette away from the sample port before releasing the plunger. The microfluidic channels are a closed system, so releasing the plunger while the tip is still in the port will withdraw the sample back into the tip.
If your SPRINT has software version 2.0 or later, you can redefine the associated RLF with completed runs from your instrument. From the Web App, click the Download Logs option from the main menu, then choose your desired run. Click Fix RLF to upload a new RLF and virtually rescan your data. If you need to upgrade your instrument software to version 2.0 or later, please contact firstname.lastname@example.org.
Additional maintenance tasks may need to be performed as necessary to ensure proper operation of the instrument. Planned maintenance is included in a service contract. Please refer to the User Manual for more details.
Unlike the MAX or FLEX systems, the SPRINT does the work of the Prep Station and Digital Analyzer in one. Sample processing and scanning occur in one run, and these two processes cannot be dissociated. Once a SPRINT cartridge has been processed and scanned, it cannot be used again.
If your SPRINT has software version 2.0 or later, a virtual re-scan is possible if you selected an incorrect RLF file on your initial run. From the Web App, click the Download Logs option from the main menu, then choose your desired run. Click Fix RLF to upload a new RLF and virtually rescan your data. If you need to upgrade your instrument software to version 2.0 or later, please contact email@example.com.
The SPRINT is designed to keep a variety of instrument logs in case troubleshooting or other support is needed. In the case where the instrument experiences an error outside of a run, please send the instrument logs to firstname.lastname@example.org for further assistance. To download the instrument logs that are not associated with a particular run, please use the following instructions:
Open the Web App, select Administration, then select Download Logs.
Under Log Type, select System, uncheck the box next to Only Include Most Recent Logs, select the Updated From box and select the desired date, select Apply, then select Download.
Assays and Applications
nCounter technology is ideal for a wide range of discovery and translational research applications, including gene expression analysis, solid tumor profiling, immune-oncology profiling, gene fusion analysis, single-cell gene expression analysis, miRNA expression analysis, copy number variation analysis, lncRNA expression analysis, and ChIP-String expression analysis. Additionally, multiple analytes can be profiled within a single experiment, allowing for maximum flexibility on projects where simultaneous digital detection of RNA, DNA, and protein is paramount.
There are three main differences between these two products:
Difference in Build: A Custom CodeSet has the gene-specific sequences built into the reporter and capture probes, whereas a TagSet requires the user to order gene-specific oligonucleotides.
Level of Multiplexing: A Custom CodeSet can multiplex up to 800 genes and a TagSet up to 192 targets.
Difference in Workflow: If ordering a Custom CodeSet, NanoString will provide all of the probes necessary for the reaction at the appropriate concentrations. For a Custom TagSet, NanoString provides the recommended sequence information, and you will need to order the oligonucleotides and prepare them at recommended concentrations before adding them to the hybridization reaction.
Because capture and reporter probes are added in excess in each nCounter reaction, a highly overexpressed target gene may prevent the detection of low-abundance target by saturating the available surface area on a cartridge. An over-abundance of one type of probe-target complex can reduce the chances that a low abundance target will be able to bind and be detected. In essence, a highly overexpressed probe will occupy more of the available binding “real estate” on a cartridge. Attenuation is a strategy to reduce the number of over-abundant probe-target complexes bound to a cartridge and therefore increase detection of less-abundant targets.
To test attenuation, the first step is to run at least one sample to determine the total raw counts for all genes (no attenuation). In parallel, run the same sample with 90% attenuation by adding 180 pM of “cold” Reporter Probe oligo (Reporter Probe without the barcode) to the hybridization reaction containing 20 pM active Reporter Probe. If attenuation proves necessary, 90% will be a robust attenuating factor for almost any gene. At minimum, this test requires only half a cartridge.
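As a quick sanity check, the mix above can be expressed as a fraction: the attenuation level is simply the share of cold probe in the total reporter pool. A minimal sketch (the function name is ours, not part of any NanoString software):

```python
# Illustrative check of the 90% attenuation mix described above.
# With 20 pM active Reporter Probe plus 180 pM "cold" (barcode-free)
# Reporter Probe, only 10% of probe-target complexes carry a barcode.
def attenuation_fraction(active_pM: float, cold_pM: float) -> float:
    """Fraction of signal suppressed by adding cold reporter probe."""
    return cold_pM / (active_pM + cold_pM)

print(attenuation_fraction(active_pM=20, cold_pM=180))  # 0.9 -> 90% attenuation
```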
It is important that the unattenuated sample counts be below the saturation threshold. If the binding density is > 2, repeat the sample with ¼ of the RNA input. Reducing the input will ensure that the attenuation measurement is accurate.
Unfortunately, tRNAs are not compatible with nCounter. The sequence composition, length, and distribution of sequence diversity in tRNAs are not amenable to our platform.
Yes. NanoString’s nCounter technology is based on a novel method of direct molecular barcoding and digital detection of target molecules through the use of color-coded probe pairs. The nCounter miRNA Sample Preparation Kit provides reagents for ligating unique oligonucleotide tags (miRtags) onto the 3’ end of target miRNAs, allowing short RNA targets to be detected by nCounter probes.
NanoString cannot guarantee the performance of a component if it has not been stored at the recommended temperature. These reagents should be used at your own risk.
CodeSet: Improper storage of the CodeSet
Multiple freeze/thaw (F/T) cycles can be detrimental to the CodeSet, as they can damage RNA, DNA, and fluorophore linkages; however, CodeSet performance is generally robust and may not be dramatically affected. For example, if a -80°C freezer fails, the CodeSet will likely maintain full functionality after brief storage at 4°C or even room temperature. Customers have obtained robust data from a CodeSet left at room temperature for 8 hours, although such results are not guaranteed. If you want to use a compromised CodeSet, it is recommended to test it with samples that have previously been run on a pristine CodeSet for comparison.
If the CodeSet is stored at -20°C, it may display some loss in signal after 1–2 months. Because this could introduce technical variance into the experiment, it is recommended to reorder a new CodeSet.
Max/Flex Cartridge: Improper storage of the Max/Flex cartridge
If the Max/Flex Cartridge is briefly stored at -80°C (1 week or less), it can still be used without compromising its function. Prolonged storage at -80°C will damage the streptavidin binding surface, and a new cartridge should be ordered.
Prep plates: Improper storage of the Prep Plates
Prep Plates will freeze at -20°C or -80°C and should be reordered if frozen due to damage to the magnetic beads. Prep Plates that reach room temperature during transit should be replaced and not returned to 4°C.
Sprint Cartridge: Improper storage of the sprint cartridge
Buffers in the cartridge will freeze if stored at -80°C. Also, the valve membranes will freeze and lose the ability to open and close, resulting in lane blockages. The frozen cartridge should be discarded, and a new one should be ordered.
NanoString recommends keeping the AbMix at 4°C after thawing if the remaining amount will be used within 2 weeks. A single additional freeze/thaw is not predicted to affect performance, but it is best avoided if possible.
Generally, no. Bubbles will not affect the purification or the scanning of the cartridge.
Bubbles can sometimes appear in the cartridge; we believe this is related to atmospheric conditions in the lab. This is only an issue in the rare instance that an electrode ends up inside a bubble, in which case nothing will stretch in that particular lane.
Sprint Reagent C should be handled on its own as hazardous waste; this is our practice for disposing of our expired batch. However, a dilute mixture of Sprint Reagents A, B, and C should be fine for sewer discharge in most, if not all, jurisdictions. An extremely small concentration of sodium azide is not a hazardous waste concern per se; the concern is its tendency to form metal azide deposits on plumbing, which can be explosive if shocked. However, this is easily mitigated by flushing copiously with fresh water after discharging sodium-azide-containing solutions.
In addition to the above, please refer to our SDS; it is also advisable to check your local sewer regulations.
No. If precipitate appears in Reagent C, it will not have any effect on the performance of the assay and will not cause any blockages of the system.
Yes. The Sprint Reagent C should be slightly cloudy; that is the normal state of this reagent. This might appear as ubiquitous cloudiness or strands of opaqueness, especially near the bottom of the bottle. Clear and cloudy bottles are both normal and fully functional.
It is possible to purchase additional strip tubes and caps separately from the Master Kit. These can be purchased directly from the manufacturer, BioPlastics/Cyclertest, Inc. (https://bioplastics.com/index.aspx), using the following product information:
- Cap item #B56501 – EU thin-wall 12-cap strip, robust, indented flat, natural
- Tube item #B56601 – EU 0.2ml thin-wall, 12-tube strip, extra robust, regular profile, frosted, natural
Reagents A and B are 500 mL each; Reagent C is 125 mL. One bottle of each reagent is enough for 16 runs.
Visible variation in the thickness of our cartridges is expected and will not impact assay results. In the extremely rare case that cartridge thickness prevents proper loading into the digital analyzer, please contact email@example.com for further assistance.
- Labeled antibodies are tested against unlabeled antibodies for cross-reactivity to samples run on the panel of interest.
- The optimal concentration for use is determined by titrating the antibody near our recommended amount.
- To evaluate specificity, test the antibody with positive and negative control samples.
- In some cases, it may be desirable to perform IHC on the conjugated and unconjugated antibody lots for comparison.
Sample Processing and Input
The nCounter gene expression assay can use purified total RNA, raw cell lysates in guanidinium salts, blood lysate from PAXGene™, and amplified RNA directly added to the hybridization reaction. We have also obtained good results from total RNA isolated from FFPE samples, provided the RNA meets minimal integrity requirements (more than 50% of RNA fragments longer than 200 nt).
The nCounter Analysis system is ideal for profiling DNA, RNA, and protein in any combination simultaneously. A wide range of sample types are compatible with most of our assays, including isolated RNA, cell lysate, FFPE, and fresh frozen tissue. Please check the product specifications or contact firstname.lastname@example.org for more information on compatible sample types with your assay.
For most gene expression assays, we suggest 100 ng of total RNA or a lysate of 10,000 cells. More or less material may be used to boost signal or reduce sample requirements, but quantitation of extremely rare transcripts may be affected when less material is used in our system.
To a large extent, no. Differences in loading can easily be removed during CodeSet content normalization in nSolver. It is important to note, however, that if the sample input is so low that the counts for your genes of interest drop below background, normalization cannot compensate.
No, the use of a polyacrylamide carrier will not interfere with our assay.
If you are using purified RNA as your sample input, 8 µL is the maximum amount recommended for an nCounter XT gene expression assay.
NanoString has determined that the purification methods for Probes A and B can be critical for optimal Elements assay performance; Probe B is best when purified by PAGE, while the purification method for Probe A is less important.
Two different purification methods are recommended because the assay is sensitive to probe cross-contamination. For example, if probe A is stored for any length of time with probe B, a very low rate of cross-linking occurs between probes even at very low temperatures. These cross-linked probes will result in higher background signal even in the absence of target RNA. Similarly, if the two probes are manufactured on the same production line (as might occur if they are both PAGE purified), then there is sufficient cross-contamination between probes to yield higher backgrounds. Thus, NanoString recommends using two different purification methods for Probes A and B to ensure that there will be no cross-contamination during manufacturing.
To ensure accurate sample temperature and to minimize evaporation risk, we advise using plastic consumables that are recommended by your thermal cycler manufacturer.
The goal of using a spike-in oligo with your miRNA panel is to introduce a known constant across all of your samples. In doing so, this internal control can account for differences in counts that would arise solely from small variations in purification efficiency from sample to sample.
All of our miRNA CodeSets are ready for you to add spike-ins, since the spike-in Reporter probes are automatically included in all CodeSets. If you do not add spike-ins, these reporter probes do not detect any target sequences and will appear as having no counts upon analysis. If you wish to add spike-ins, refer to the Gene List Excel file for your panel and scroll to the bottom to see a list of spike-in sequences. Order the oligos that match the spike-in Reporters.
When performing your spike-in experiment, add the spike-in oligos after lysis, but before purification and extraction. Please refer to our Tech Note for more details.
Yes. Lysates from primary isolated cells or cultured cell lines can be used in an nCounter GX hybridization without further RNA purification. The maximum sample input volume when using cell lysates depends on the type of lysis buffer used.
NanoString has developed recommendations specific to primary isolated cells and tissue cultured cells.
More details are provided in protocol MAN-10051-03: Preparing RNA and Lysates from Fresh/Frozen Samples.
Yes. Tissue lysates can be used in an nCounter GX hybridization without further RNA purification. Best results are obtained if the lysis and homogenization are done in a concentrated guanidinium salt-based buffered solution. Many manufacturers’ RNA purification kits use a chaotropic guanidinium salt-based lysis step to break open cells and inactivate nucleases as a first step. Once completed, the lysate can be stored and used in an nCounter assay. If Buffer RLT is used for lysis, the maximum recommended input volume into the hybridization is 1.5 µL.
NanoString recommends a minimum of 5,000 (for SPRINT) to 10,000 (for MAX and FLEX) cell equivalents per nCounter XT hybridization reaction for most applications. The required number of cells for any given application will ultimately be dependent on the abundance of the mRNA targets of interest in the tissue sample to be assayed. As a guidance for the amount of RNA to be used in hybridization for a given sample type and codeset combination, refer to the binding density QC parameter in the nSolver User Manual. The binding density, reported in barcodes per square micron, responds almost linearly to the amount of sample RNA introduced in the hybridization within the optimal range specified for your instrument.
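Because binding density responds near-linearly to sample RNA within the optimal range, a rough input adjustment can be estimated from a measured density. A minimal sketch under that linearity assumption; the target density of 1.0 barcodes/µm² is illustrative only, so consult the nSolver User Manual for your instrument's actual optimal range:

```python
# Sketch: estimate an adjusted RNA input from a measured binding density,
# assuming the near-linear response described above. The default target
# density (1.0 barcodes/um^2) is an assumption for illustration, not a
# NanoString specification.
def adjusted_input_ng(current_input_ng, measured_density, target_density=1.0):
    """Scale RNA input linearly toward a desired binding density."""
    return current_input_ng * (target_density / measured_density)

# e.g. 100 ng of RNA yielded a density of 0.25 -> try ~400 ng next run
print(adjusted_input_ng(100, 0.25))  # 400.0
```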
You do not always need to run technical replicates with gene expression assays; however, if you are a new user, it would be worthwhile to run technical replicates initially, as it will allow you to gain confidence in the technology. Biological replicates, however, should always be run whenever possible.
NanoString automatically includes probes against fourteen ERCC transcript sequences in every CodeSet. Six of these sequences are used as positive hybridization controls; the corresponding synthetic RNA targets are present in the CodeSet as well. Eight of the probes are used as negative controls, and the corresponding target transcripts are absent. Collectively, these internal controls allow you to determine the hybridization efficiency and non-specific background in your experiment. It is up to you to decide whether you require additional experimental or biological controls in your experiment.
It is unlikely, as capture and reporter probes are present in excess. However, the formulation itself may be compromised if the probes are diluted or if the total volume added to a hybridization reaction is altered from what the user manual recommends.
You may see a moderate increase in counts associated with longer hybridization periods. However, this difference normalizes away during data analysis.
Typically, the incubation conditions during sample hybridization and processing do not lead to DNA denaturation. Therefore, any DNA present in your sample should remain double-stranded and invisible to the probes, which need single-stranded targets to hybridize.
DNA contamination may interfere with your assay if it causes an overestimate of quantitation of the RNA concentration in the sample. To avoid this, follow the steps below when preparing your sample. Please contact email@example.com for further assistance with sample preparation.
- Use an RNA quantification method that discriminates DNA from RNA, such as Qubit.
- For overestimations of several fold (as opposed to several log), careful data normalization may be able to remove bias from your dataset.
- If possible, increase the amount of RNA from the DNA-contaminated sample as long as total input remains below critical levels of saturation. If you observe that your DNA-contaminated samples lead to binding densities below 1.0, you can safely increase the sample input to boost the counts instead of relying on normalization alone. Contact firstname.lastname@example.org for more assistance with determining your ideal input amount.
If you are using a pre-made CNV panel, we strongly recommend using AluI to fragment your DNA as we have verified that the AluI sequence is not present in any of the target sequences in the panel.
In addition, selecting an alternative restriction enzyme may impact your total number of counts, as the size of the fragments influences the binding kinetics of the probe to its target. Smaller fragments exhibit faster hybridization kinetics and may result in more counts even if the number of molecules is the same; conversely, larger fragments, such as those obtained with EcoRI, may result in fewer counts.
If you are creating your own custom CNV panel, ensure that the restriction enzyme you select is not present in your panel target sequences. You may also wish to validate your experiment for the average fragment size you obtain with the restriction enzyme of your choice.
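The site check described above can be automated with a simple substring scan. This is an illustrative sketch, not NanoString software; the recognition sequences shown (AluI: AGCT, EcoRI: GAATTC) are the enzymes' published sites, and the example target sequences are made up:

```python
# Sketch: verify that a restriction enzyme's recognition site does not
# occur inside any custom CNV target sequence, as recommended above.
SITES = {"AluI": "AGCT", "EcoRI": "GAATTC"}

def targets_containing_site(targets: dict, enzyme: str) -> list:
    """Return names of target sequences that contain the enzyme's site."""
    site = SITES[enzyme]
    return [name for name, seq in targets.items() if site in seq.upper()]

# Hypothetical target sequences for illustration only
targets = {"probe_1": "TTGGCCAATT", "probe_2": "TTAGCTAA"}
print(targets_containing_site(targets, "AluI"))  # ['probe_2']
```

Any target flagged by this scan would be cut by the enzyme, so either the probe region or the enzyme choice would need to change.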
If you wish to concentrate your RNA before performing an nCounter assay, NanoString recommends using a commercially available RNA concentration kit. These typically work to both concentrate your sample and reduce impurities that could negatively affect your assay results.
NanoString cannot guarantee the performance of a CodeSet if it has not been stored at the recommended temperature. These reagents should be used at your own risk.
The nCounter gene expression assay can use purified total RNA from fresh frozen samples (25-100 ng), as well as purified total RNA from fragmented or FFPE samples (input will vary with fragmentation profile, see FFPE MAN-10050 and contact email@example.com for more details). We have also obtained good results from amplified RNA from frozen or FFPE samples (1 or 10 ng pre-amplification, respectively, see low input MAN-10046), as well as from raw lysates of cell suspensions (lysis buffer and cell numbers will vary by cell type, see MAN-10051 and contact firstname.lastname@example.org for more details).
Most NanoString assays require relatively modest total RNA concentrations (10-20 ng/µL for mRNA, 16.5-33 ng/µL for miRNA). Nevertheless, although total yields for some samples may be sufficient, sometimes the concentration of the samples still does not meet these standard guidelines. In other situations, the use of highly degraded RNA from (e.g.) fixed samples may require higher concentrations. Thus, it may sometimes be useful to concentrate RNA further to obtain sufficient mass for use within the sample volume limitations of an nCounter assay.
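The arithmetic behind these guidelines is simple: the minimum usable concentration is the required RNA mass divided by the maximum sample volume the assay accepts. A small sketch using example figures from this document (100 ng within a 5 µL maximum input); the exact mass and volume limits vary by assay, so treat these numbers as placeholders:

```python
# Sketch of the concentration arithmetic above: required mass divided by
# the maximum input volume gives the minimum workable concentration.
# The 100 ng / 5 uL figures are examples quoted elsewhere in this
# document, not universal limits.
def min_concentration(required_ng: float, max_volume_uL: float) -> float:
    """Minimum ng/uL needed to deliver the required mass within the volume cap."""
    return required_ng / max_volume_uL

print(min_concentration(100, 5))  # 20.0 ng/uL
```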
In general, there are three approaches one can use to concentrate RNA: column concentration, precipitation, or evaporation. There is not one universally ideal approach, as the best method will be dependent on the sample source and the assay being performed.
The advantage of column concentration is that sample impurities like salts or organic solvents which may have been inadvertently carried over in the sample during the purification process will be removed. Consequently, such columns are generally the preferred method of concentration for nCounter miRNA assays, as these are more sensitive to sample impurities (the miRNA assay has an enzyme-driven ligation step). Columns for concentration can be purchased from one of several vendors. Two columns which we recommend for miRNA assays (required for biofluid samples like serum or plasma) include one from Amicon (a size exclusion column for proteins but also works for miRNA: Millipore #UFC500396) and another from ZymoResearch (RNA Clean & Concentrator #R1015). The disadvantage of using a column concentrator is that there will likely be some loss of total RNA mass. Therefore, for some samples with limited volumes or limited initial RNA mass, there may be little improvement in final concentration.
Precipitation of total RNA using a co-precipitator like linear acrylamide or glycogen can also be an effective method to concentrate RNA. Recovery yields can be quite good even with low RNA quantities. However, the procedure is more labor-intensive, requires handling of toxic compounds, and may still result in some impurities being carried over into the final precipitate. Therefore, this may not be ideal for nCounter miRNA assays run with biofluid samples where target concentrations are low and assay efficiency needs to be maximized.
The use of a Speedvac is another option for concentrating RNA samples. After evaporation, the sample can be re-suspended in the appropriate volume, though whatever impurities were in the original sample will also be concentrated within the final sample. This method would therefore only be recommended for samples with high purity specs (260/280 and 260/230 ratios of 1.8 to 2.2), or for samples intended for use in any of the nCounter enzyme-free assays (that is, neither the miRNA assay nor any assay that includes a pre-amplification step).
The goal of using a spike-in oligo with your miRNA panel is to introduce a known constant across all samples. This internal control can account for differences in counts that would arise solely from small variations in purification efficiency from sample to sample. Generally, spike-ins are not required if already using a robust normalization strategy such as the top 100 method as we recommend for samples consisting of tissues or cells.
All of our miRNA CodeSets are ready for you to add spike-ins, since the spike-in Reporter probes are automatically included in all CodeSets. If you do not add spike-ins, these reporter probes simply do not detect any target sequences and will appear as having no counts upon analysis. If you wish to add spike-ins, refer to this link, and under “Product Information”, select the Gene List Excel file for the appropriate panel, then scroll to the bottom to see a list of spike-in sequences. Simply order the oligos that match the spike-in Reporters.
When performing the spike-in experiment, add the spike-in oligos after lysis but before purification and extraction. Please refer to our tech note under “Support Documents” for more details.
Is this still okay to run?
- If the volume is low, but not gone: Yes
- If there is nothing left but thick residue or crust: No
If the volume is low but not gone, the evaporation was slow enough that the hybridization itself had time to occur. Before the sample is drawn out of the tubes, the Prep Station adds buffer, which reinstates the sample volume necessary for successful purification.
If 50% or more of the sample is above roughly 200 nt in fragment length, you can load slightly more RNA than for a standard assay (150 ng instead of 100 ng total RNA) and get very similar raw counts. Once that fraction drops below 50%, you will need to add increasing amounts of RNA to get counts equivalent to a fresh frozen sample. There are ways to precisely calculate how much to add for a given fragmentation level, but educated estimation based on RNA fragmentation traces will also work:
| % of RNA above 200 nt | Input approximation equivalent to fresh frozen sample |
| --- | --- |
| < 3% | Unlikely to yield data |
This approach depends on how concentrated the samples are; it may be necessary to concentrate the sample to achieve some of these higher input amounts while staying within the maximum sample input of 5 µL. See Question 1 for details.
Typically, we recommend 100 ng of input for FFPE-derived miRNA samples, since the smaller miRNA targets generally show much less degradation than the larger rRNAs and mRNAs. For extreme cases of degradation, one might need to increase input, but in most cases it isn’t necessary. Most users can use the same input amount as for fresh samples and then adjust if needed after the first cartridge.
The more common challenge for miRNAs in FFPE samples is slice thickness. Below ~8 µm, miRNA targets can leach out of samples during the deparaffinization steps. Therefore, we usually recommend slices of at least 10 µm to prevent this leaching effect.
The Solid Tumor Lysate Panel sample preparation protocol (GLPCP_PM0005_PB_nCounter Vantage 3D RNA_Protein Solid Tumor Assay) was recently developed to expand the nCounter-compatible sample types to the typical tissue lysates used in ELISA and Western analyses. Briefly, tissue or cells are homogenized in a buffered SDS/βME buffer and boiled to denature the proteins and tightly associate SDS with the resulting peptides. After a small aliquot is set aside for RNA counting, the Solid Tumor Lysate Protein protocol removes the excess SDS using a Pierce Detergent Removal spin column. The resulting sample is then typically diluted to an appropriate concentration for binding to a charged plastic microwell plate (please see the RNA:Protein Solid Tumor Lysate Manual, MAN-10053).
The recommended SDS/βME lysis buffer recipe is 100mM Tris pH 6.8, 2% SDS, 50mM DTT. Tris serves as a buffer to maintain an acidic pH. βME is used to reduce intra- and inter-peptide disulphide bonds thereby denaturing proteases, phosphatases and nucleases. SDS becomes tightly associated with the peptides after boiling to normalize the charge characteristics of the peptides, assisting binding to the charged plastic. Therefore, any lysis buffer that serves a similar function can be used to homogenize the tissue or cells. If the resultant sample has not been reduced and denatured with SDS/βME, this step is strongly recommended. However, it may not be required for all buffers and samples. If it is important to avoid the boiling step or to use a detergent different from SDS, the requirement for denaturation should be tested empirically. Removal of excess detergent is important for efficient plastic binding, so it is strongly recommended that all samples get cleaned up with the Pierce Detergent Removal spin column prior to plate binding. Buffers often used for Reverse Phase Protein Arrays (TPER (Pierce) and Triton X/HEPES buffers) are compatible with the Solid Tumor Lysate protocol, both with and without SDS/βME treatment. As more lysis buffers are successfully used with the Solid Tumor Lysate protocol this document will be updated. Any buffer tested and found to be incompatible will also be noted.
Finally, for any RNA:Protein protocol it is important to maintain an acidic or physiological pH (7.2-7.4) in the lysis buffer used to generate and store the RNA-containing lysate. Alkaline hydrolysis is a very efficient RNA degradation reaction, even at room temperature. If your lysis or sample storage buffer has an alkaline pH, it is critical to acidify or neutralize the solution as soon as possible after lysis.
What if I only have 20-40ng of RNA? Can I still run the nCounter Gene Expression Assay?
NanoString offers a “Low RNA Input Kit” that can be used with as little as 1ng of RNA and is available for most of our off-the-shelf panels. Please follow the Low RNA Input Kit link to learn more.
Can I use Exosomes in the nCounter miRNA assay?
Yes, NanoString has a white paper that details exactly how to use exosomes in our miRNA assay. Please see the Support Documentation page and look under Tech Notes and Whitepapers > miRNA and CNV to download Applications with Exosomes and Extracellular Vesicles in miRNA Research.
Our low-input kit is designed for RNA applications and uses a targeted amplification approach with a low number of amplification cycles and primers designed to flank the probe-binding region. We currently do not have a protocol supporting a low-input approach for the CNV assay, which is a DNA application. Due to the varying efficiencies of primer binding and enzymatic activity, more subtle copy number amplifications or deletions may be difficult to quantify accurately and reproducibly.
NanoString has developed a proprietary assay design engine for probe design. The design engine contains algorithms that interrogate each target sequence in sequential 100 nucleotide windows, shifting along the target sequence one nucleotide at a time. The algorithm scores the 100-nucleotide sequences on a variety of sequence characteristics (hybridization efficiency, GC content, Tm, secondary structure, etc.) to identify target regions that fall within our ideal design parameters. Probe design rules include no more than 85% sequence homology between sequences (so that probes are discriminatory) and no more than 16 consecutive nucleotide matches.
The design engine screens all potential probe pairs against the target organism’s transcriptome to ensure specificity and by default we bias the final selected probe pair towards a region common to all transcript variants for each target gene whenever possible. In some cases, it is possible to design probe pairs that distinguish different transcript variants or target specific regions of a gene although this is dependent upon the specific sequences within each region.
The target sequences and associated probe pair data are included in a CodeSet Design Report that is sent to the customer for review and approval prior to the start of the manufacturing process.
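The sliding-window scan described above can be sketched as follows. This is a minimal Python illustration only: NanoString's actual design engine and scoring metrics are proprietary, and the GC-content bounds used here are assumed purely for demonstration.

```python
# Illustrative sliding-window scan (NanoString's actual design engine and
# scoring metrics are proprietary; the GC bounds below are assumed).

def gc_content(seq):
    """Fraction of G/C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def candidate_windows(target, window=100, gc_min=0.35, gc_max=0.65):
    """Slide a 100-nt window along the target one base at a time and keep
    windows whose GC content falls inside the (hypothetical) design range."""
    hits = []
    for start in range(len(target) - window + 1):
        win = target[start:start + window]
        if gc_min <= gc_content(win) <= gc_max:
            hits.append((start, win))
    return hits
```

A real scorer would combine several such metrics (Tm, secondary structure, cross-hybridization screens) rather than GC content alone.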
The miRNAs included in the nCounter miRNA Expression Assays have been curated to ensure that only biologically significant miRNAs are included in the panel; the nCounter Human v3 miRNA panel contains a probe for all miRNAs that are denoted in miRBase 21 as “high confidence”. In addition, NanoString applies a set of proprietary metrics such as observed read ratios and expression analytics to screen potential targets prior to inclusion in the panel. NanoString also performs a scan of the current literature to ensure that only actionable and clinically relevant miRNAs are included in the miRNA panels. Altogether, each miRNA panel contains a comprehensive set of miRNAs that are biologically significant and ideal for targeted discovery and validation experiments.
Working with xenograft RNA is a particular strength of NanoString technology. We have extensive experience with multi-species designs, particularly the mouse-human xenograft tumor model. Designing a CodeSet that targets mRNAs from each species is relatively straightforward; we simply check all transcriptomes likely to be present in the reaction for cross-hybridization. In addition, our nSolver software allows you to create custom annotations for each target, so it is easy to assign them to different pathways or species for downstream analysis.
Yes. By default, NanoString will design your probes to recognize as many isoforms of the gene as possible. However, if you would like to identify splice variants, it is possible to design multiple probes for one gene. Each splice variant will count as one “gene” in your final gene list. Please contact email@example.com for more information.
In all of our probe designs across organisms, we design to what is considered the reference sequence by NCBI unless otherwise specified in a custom project. The NCBI mouse reference sequence is from C57BL/6J mice. Our probes are robust to small changes in the actual target sequence, which makes them insensitive to most variation from the reference sequence that exists between strains. To determine how well a probe will work against a variant target, we compare the percent identity of the original, targeted sequence to the variant (using BLASTn). Any target with a percent identity of 95% or greater is likely to be targeted at similar efficiency to the intended sequence. Thus, in order for probe efficiency to be significantly altered, there would have to be more than 5 bases that differ between the reference and the alternate strain.
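The 95% identity rule above can be illustrated with a simple position-wise comparison. This is a sketch only; the actual assessment uses BLASTn alignments, which also account for gaps.

```python
# Illustrative position-wise identity check (the real assessment uses BLASTn
# alignments, which also account for gaps).

def percent_identity(ref, variant):
    """Percent of matching positions between two equal-length sequences."""
    if len(ref) != len(variant):
        raise ValueError("sequences must be the same length")
    matches = sum(a == b for a, b in zip(ref, variant))
    return 100.0 * matches / len(ref)

def likely_detected(ref, variant, threshold=95.0):
    """Per the rule above, a variant with >= 95% identity to the targeted
    region is likely captured at similar efficiency."""
    return percent_identity(ref, variant) >= threshold
```

For a 100-nt target region, 5 mismatched bases (95% identity) still passes, while 6 mismatches (94%) does not, matching the "more than 5 bases" statement above.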
The miRNA assay is designed to digitally quantify mature miRNA. It cannot detect primary miRNA (pri-miRNA) or precursor miRNA (pre-miRNA). The assay includes a step to ligate an oligo (called a miRtag) to the 3’ end of mature miRNA molecules in order to provide the capture and reporter probes with a sufficiently long molecule for hybridization. This lengthening with the miRtag is required because the NanoString probes hybridize to approximately 100 consecutive nucleotides, while mature miRNA is typically 22 nucleotides in length. In addition, the double-stranded nature of primary and precursor miRNA and their inaccessible 3’ ends interfere with the ligation reaction. The ability to specifically detect only mature miRNA allows direct quantification of the molecules actually involved in gene regulation. For more information on the miRNA assay, please refer to the miRNA Assay Manual.
NanoString chemistry is based on hybridizing anti-sense capture and detection probes to a 100bp segment of the target of interest. Fragment size of the nucleic acid sample can modulate the efficiency of this reaction under certain conditions.
nCounter technology requires that RNA or DNA is sufficiently fragmented to allow for hybridization of the capture and detection probes. For optimal performance, we recommend that at least 50% of the sample be fragments 200 nt or larger, as determined by Agilent Bioanalyzer®. Total nucleic acid input may be increased according to fragment size distribution, per our Tech Note, if the sample is highly degraded.
The optimal upper limit of nucleic acid fragment size is 800 bp, and counts will decrease linearly as the percentage of fragments above 800 bp increases. mRNA and lncRNA targets are susceptible to nicking during the hybridization reaction, and ultimately most fragments will be reduced in size so that the percentage of fragments over 800 bp is negligible and does not interfere with the efficiency of the assay. DNA targets, however, are stable at 65°C and will not be reduced in size during the hybridization reaction. It is therefore critical that DNA samples be thoroughly fragmented by AluI digestion or sonication.
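The 200 nt input check above can be sketched as follows, assuming the Bioanalyzer size distribution is summarized as (fragment length, fraction of sample) pairs; that representation is a simplification for illustration.

```python
# Illustrative input check; the Bioanalyzer smear is assumed to be summarized
# as (fragment_length_nt, fraction_of_sample) pairs.

def fraction_at_or_above(profile, cutoff_nt=200):
    """Fraction of the sample made up of fragments >= cutoff_nt."""
    return sum(frac for length, frac in profile if length >= cutoff_nt)

def passes_input_check(profile):
    """At least 50% of the sample should be fragments of 200 nt or larger."""
    return fraction_at_or_above(profile, 200) >= 0.5
```

Samples failing this check can still be run by increasing total input per the Tech Note referenced above.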
Yes. It is possible to design probes to distinguish between precursor mRNA and mature mRNA sequences. Introns and exon/intron junctions can be used to detect precursors, whereas exon/exon junctions can be used to detect spliced mRNAs. Simply tell us what you would like to detect and we will design the necessary probes.
NanoString probes are designed to span exon-exon boundaries so that only mature mRNA species are counted. If you wish to detect pre-spliced transcripts, simply submit a request to firstname.lastname@example.org and we can design the correct probes to fit your needs.
Yes, internal reference and housekeeping genes are used synonymously in our literature. Internal reference genes are a subset of genes within your CodeSet that have low variability across sample types and high counts, regardless of their function. Usually these genes play generic, non-cell specific roles, such as in metabolism.
The Ligation Positive and Ligation Negative control sequences are pieces of ERCC (External RNA Control Consortium) sequences that were trimmed to have profiles similar to natural miRNAs. These sequences were tuned and refined by NanoString to perform robustly with our miRNA assay.
NanoString recommends keeping the AbMix at 4°C after thawing if the remaining amount is intended to be used within 2 weeks after the first use.
NanoString recommends storing your extra lysates at -80°C using strip tubes with a secure evaporation-resistant seal between the cap lid and the tube. Standard 12-well PCR tubes with domed lids are preferred and can be used for both boiling and storing your lysates.
We are partnering with leading antibody vendors to provide content that is of the highest quality.
We have a robust validation process that uses cell-based model systems known to differentially express specific antigens. NanoString data are compared with flow cytometry for cell suspension-based assays, Western blot and ELISA for lysate-based assays, and IHC for FFPE-based assays. For more information, please see our webinars with BioLegend and CST.
This depends on the abundance of the antigen of interest and the antibody used in the panel. For abundant proteins such as HER2 in breast cancer cells, we have demonstrated internally that we can detect as few as 1% HER2-expressing cells in a background of 99% non-expressing cells, which translates to an LOD in the very low to sub-picogram range.
In reproducibility experiments, we see R2 values of 0.9 or above.
The protein panel includes controls for normalization. nSolver will allow you to customize your method of normalization.
RNA and protein preparations are separated and the protein sample is diluted prior to combining with RNA in the hybridization-based detection assay.
Advanced analysis has the ability to generate scatter plots with protein and RNA data.
Denaturation of protein lysates is critical for optimal assay performance. This step ensures the DNA oligo tags on the antibodies in the mix are in the correct conformation for the hybridization.
Our RNA:Protein protocols are designed for ease of use, with all samples isolated upfront, and to minimize the required sample input. If other methods of RNA preparation are preferred, they should be compatible with RNA:Protein analysis.
We can measure targets in all three compartments. Note that each panel is designed for specific content based on the sample types profiled and the antibodies included. Like all antibody-based protein detection methods, it is important that epitopes are in the correct conformation for detection by a specific antibody. Our panels are designed with this in mind and include antibodies that are compatible for a specific sample type.
The antibodies in the nCounter Vantage 3D Immune Cell Profiling Panel measure cell surface markers only.
The antibodies in the nCounter Vantage 3D Solid Tumor Assays detect proteins in all three compartments.
Current protocols and antibodies included in our panels are compatible with fresh cell suspensions (cell lines or primary cells such as PBMC), cell or tissue lysates, and FFPE. Make sure to check our product page and protocols for details about sample type compatibility.
As with all nCounter assays, input to the nCounter® Sprint is half that of input to nCounter MAX or FLEX. This dilution is addressed in the manual and the workflow is otherwise the same.
Yes. A standalone protein-only kit, indicated by the (D), is available for purchase and contains ERCC controls.
Protein reagents in the multi-omics kit should not be used as protein only. Assay controls are provided in the RNA portion of the assay.
Vantage 3D Protein (R) panels allow for protein analysis to be spiked-in to the following RNA panels: PanCancer Pathways, PanCancer Progression, nCounter Vantage RNA, nCounter Vantage Fusions, and Custom CodeSets (note that you must run a no template control when using Protein (R) with Custom CodeSets to determine background levels).
NanoString provides a Protein Barcoding Service that allows for three additional proteins of interest to be added to any protein panel.
Additional charges apply for customization, and custom antibodies can be tested for cross-reactivity with the protein panel of choice using the optional Cross-Reactivity Testing service.
Note that custom antibodies must be provided as 1mg in 1xPBS and cannot be functionally validated by NanoString.
Yes, we also offer custom protein assay development (<10-plex) with our Vantage 3D Protein Barcoding Service. Custom assay development requires additional TagSet reagents for readout on the nCounter platform.
Customers interested in custom assay development should ensure all antibodies of interest are validated for the workflow similar to the one they intend to use on the nCounter platform (i.e. flow antibodies for cell suspension assays, western antibodies for lysate assays, and IHC antibodies for FFPE assays). Customers wishing to design custom protein panels should decide what positive and negative control antibodies work for their system and include these in the custom panel.
Antibodies barcoded with the Protein Barcoding Service will undergo QC to confirm labeling and concentration. However, we cannot validate biological activity, as NanoString does not have the unique epitopes required for this type of validation. Our cross-reactivity test can, however, ensure your custom barcoded antibody does not interfere with the protein panel you plan to use with your custom content.
Both Dextran Sulfate and Salmon Sperm DNA are used to decrease non-specific binding of antibody-oligo conjugates.
NanoString’s antibodies are conjugated to DNA oligos, and these negatively-charged oligos could bind to the cells or to the plate surface, resulting in a false positive signal. NanoString recommends blocking the plate and cells with Salmon sperm DNA, which binds to the “sticky” places on the cells and plate and prevents any additional reporter oligo DNA from binding to non-specific targets.
Dextran Sulfate is a highly negatively charged polymer and decreases non-specific background signal as well. Non-specific binding may be decreased by lowering charge-based interactions or increasing viscosity of the solution.
For the Vantage 3D Immune Cell Profiling Assays, Buffer W included in the Universal Cell Capture Kit – Cell Surface compatible serves the same purpose.
Cell Suspension-Based Assays (i.e. Immune Cell Profiling)
The main concern with this sample type is high levels of hemoglobin that might remain. The best route is to isolate white blood cells if possible. Please ensure that cells are viable if using this sample type as the starting material.
Some automated cell counters can falsely inflate the true PBMC cell number in a sample. NanoString recommends either manually counting your PBMC with a hemocytometer, or determining a scaling factor for your particular cell counter.
Diluting your cells in 3% glacial acetic acid will help to remove red blood cells and extremely damaged cells, resulting in a more accurate count of live and healthy PBMC. This is recommended, but not required, when counting your cells.
No, using Heparin or EDTA will not interfere with the RNA or Protein portions of the protocol. Simply follow the protocol as outlined in the manual to wash the cells with PBS before beginning the assay. Once the cells are washed, incubated with blocking buffer, antibody, and washed again, there is unlikely to be any Heparin or EDTA left in your samples.
In experiments run internally, Pearson correlations in protein abundance between flow cytometry and the nCounter protein assay are between 0.7 and 0.9 for 15 antibodies measured across 3 cell lines. Changes in relative expression are also consistent between platforms.
Flow provides co-expression measurements of the different types of proteins expressed on the cell surface. RNA:Protein analysis on nCounter provides aggregate measurements of RNA and protein from your sample and does not distinguish which cell the measurements come from unless the presence of a particular protein is definitive in identifying the presence of a specific cell type in an aggregate sample. The Vantage 3D RNA:Protein Immune Cell Assays are an ideal complement to flow cytometry when sample limitations prevent flow analysis or for profiling protein expression downstream of flow sorting cells to isolate rare cell populations.
The current protocol for detection of cell surface proteins from cell suspensions has been optimized for an input range of 20,000 – 50,000 cells. Cell input recommendations depend on the type of sample, i.e. cell lines require as little as 20,000 cells, whereas PBMCs and other primary cell suspensions may require up to 50,000 cells to generate robust expression profiles. More information can be found on individual product pages.
For the Immune Cell Profiling Panel, no. The targets in this panel are all extracellular, so you will not need to fix the cells.
Universal Cell Capture Beads are magnetic beads conjugated to the B2M antibody, which is a target expressed on all nucleated cells. This reagent allows for immobilization of cell suspension samples during sample processing.
After cell counting, assay time is ≤5 hours to the overnight hybridization step. As there are several incubations, hands-on time is ≤ 2 hours.
It is important to briefly spin the Universal Cell Capture Beads prior to opening the vial, as the aliquot provided is only sufficient for 14 reactions and beads may be stuck to the tube cap. After centrifugation, mix the beads thoroughly by pipetting before proceeding with the assay to ensure accurate pipetting.
Both flicking and pipetting work to remove liquid from the plate after washes and incubations, and both methods provide the same data quality. Note, however, that removing residual liquid after flicking/blotting is not always necessary (unless specified), whereas the pipetting method requires removing as much residual buffer as possible so that variable amounts of buffer are not left in the wells. Failure to do so may result in poor quality data.
During incubations and washes, small amounts of residual liquid are acceptable, as long as the amount is equivalent across all rows of the plate. Prior to the addition of Buffer LH, ALL buffer must be removed.
Once cells are bound to the Universal Cell Capture Beads, RNA and Protein processing can be done sequentially or in parallel. Since the antibody incubation takes 30 min – 1 hr, we suggest proceeding to this point and completing the RNA sample preparation while waiting. Ensure that cells are not left in Buffer W for extended periods of time.
Because of the sensitivity of the NanoString assay, it is critical to follow the washing procedure as described to ensure unbound antibody is removed from your sample.
Bubbles should be avoided at all steps. We recommend setting your pipette volume to half the sample volume for all wash steps.
Lysate-Based Assays (i.e. Solid Tumor for lysate)
Due to the lysis buffer composition, we find that the recommended 660nm kit provides the most accurate protein quantification. Utilization of other quantification methods may result in inaccurate protein concentrations and subsequent poor data quality.
SDS removal is critical for optimized binding of antigens to the plate.
We have not validated other lysis buffers besides the ones listed in MAN-10033 and MAN-10034. Please contact email@example.com for the most up-to-date information on additional buffer compatibility.
Our protocol guidelines are meant to ensure ease of preparation. Please adjust volumes as necessary for your specific experiment.
Both the SDS lysate and the lysate with detergent removed can be stored at -80°C in aliquots to reduce the number of freeze-thaw cycles.
The SDS in the lysis buffer and boiling should be sufficient to inhibit all endogenous enzymes.
In experiments performed internally, changes in relative expression are consistent between platforms and show high correlation. For detailed information, please see our product bulletin.
The current protocol requires a final protein concentration of 0.25 mg/mL (~4 µL) for RNA:Protein Assays and 5 μg/mL (50 µL) for Protein-only assays. We recommend a starting protein concentration of 0.5 – 1.5 mg/mL to ensure sufficient sample after detergent removal.
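Reaching the working concentration from a stock follows standard C1V1 = C2V2 dilution arithmetic; a minimal sketch (the example numbers in the usage note below are illustrative, not protocol values):

```python
# Illustrative C1V1 = C2V2 dilution helper (example numbers used with it are
# not protocol values).

def stock_volume_needed(stock_mg_ml, final_mg_ml, final_volume_ul):
    """Volume of stock (in µL) to dilute to final_volume_ul at final_mg_ml."""
    if final_mg_ml > stock_mg_ml:
        raise ValueError("stock is more dilute than the target concentration")
    return final_mg_ml * final_volume_ul / stock_mg_ml
```

For example, reaching 0.25 mg/mL in a 20 µL final volume from a 1.0 mg/mL stock requires 5 µL of stock plus 15 µL of diluent.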
After determining the protein concentration, assay time is ≤7 hours to the overnight hybridization step. As there are several incubations, hands-on time is minimal.
The assay has been validated for manual processing (MAN-10035 and MAN-10036). Please contact firstname.lastname@example.org for the most up to date information on our guidance for autostainers.
FFPE-Based Assays (i.e. Solid Tumor for FFPE)
Our assay was validated using a citrate-based antigen retrieval method.
Any UV source between 300 – 350nm should be sufficient. It is important to standardize your UV illumination method to ensure consistent cleavage between runs.
In experiments run internally, changes in relative expression are consistent between platforms and show high correlation. For detailed information, please see our product bulletin.
A single 5 µm section is sufficient for protein-only analysis. An additional section is required for nucleic acid quantification, as the antigen retrieval does not allow for downstream nucleic acid quantification.
The workflow is very similar to standard IHC techniques and results in data in under 72 hours. As there are several incubations, hands-on time is minimal.
No. Bulk measurements across your FFPE sample will be provided. To learn more about our Digital Spatial Profiling technology built on the same core protein technology, please visit our DSP technology page.
Data Analysis and Normalization
The nSolver Analysis Software is a data analysis program that offers nCounter users the ability to quickly and easily QC, normalize, and analyze their data without having to purchase additional software packages. The nSolver software also provides seamless integration and compatibility with other software packages designed for more complex analyses and visualizations. It is free for all nCounter customers and available for download. Please consult the nSolver User Manual for instructions on how to analyze your data using this software or watch our video tutorials. For additional assistance, please contact email@example.com.
The positive controls are spike-in oligos used for quality control. The positive control counts in each sample are influenced by a number of factors: pipetting accuracy, hybridization efficiency (e.g. inaccurate temperature or presence of contaminants from sample input that inhibit hybridization), as well as sample processing and binding efficiency.
Positive controls serve three general QC purposes:
- Assess the overall assay efficiency. nSolver raises a warning flag when the geometric mean of positive controls is >3 fold different from the mean of all samples.
- Assess assay linearity. Counts are expected to decrease linearly from POS_A to POS_F.
- Assess limit of detection (LOD). It is expected that counts for POS_E will be higher than the mean of negative controls plus two standard deviations.
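The three checks above can be sketched as follows. These are illustrative implementations of the thresholds as described in this answer; nSolver's exact implementation may differ (for example, the linearity sketch below only checks for decreasing counts).

```python
import statistics

# Illustrative versions of the three positive-control QC checks described
# above (thresholds follow the text; nSolver's implementation may differ).

def efficiency_flag(sample_pos_counts, all_sample_geomeans):
    """Flag when this sample's POS geometric mean differs more than 3-fold
    from the mean of all samples' POS geometric means."""
    ratio = statistics.geometric_mean(sample_pos_counts) / statistics.mean(all_sample_geomeans)
    return ratio > 3 or ratio < 1 / 3

def linearity_ok(pos_counts_a_to_f):
    """Counts should decrease from POS_A through POS_F."""
    return all(a > b for a, b in zip(pos_counts_a_to_f, pos_counts_a_to_f[1:]))

def lod_ok(pos_e_count, neg_counts):
    """POS_E should exceed mean(negatives) + 2 standard deviations."""
    return pos_e_count > statistics.mean(neg_counts) + 2 * statistics.stdev(neg_counts)
```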
Some level of variability among positive control counts is expected. If you receive no positive/negative control QC flags in nSolver, you may rest assured that the assay worked as expected. Even if you do receive warning flags, it does not necessarily mean the assay has failed. You may send your RCC files to firstname.lastname@example.org, and we will be happy to check for root cause of the flags for you.
The total surface area of each lane in a cartridge is scanned in multiple discrete units called fields of view (FOVs). After scanning is complete, the FOVs within each lane are aggregated to generate total counts across the entire surface area of the lane. The “Imaging QC” metric quantifies the performance of this imaging process: it is the fraction calculated by dividing the number of FOVs that were successfully scanned (i.e., “FOV Counted” within nSolver) by the number of FOVs that were attempted (i.e., “FOV Count” within nSolver). A significant discrepancy between the number of FOVs for which imaging was attempted (“FOV Count”) and for which imaging was successful (“FOV Counted”) may indicate an issue with imaging performance.
Within nSolver, a sample that has an Imaging QC value less than 0.75 (or 75%) will be flagged. The threshold of 0.75 was selected based on internal testing that evaluated performance over a range of FOV values. The scanner is more likely to encounter difficulties near the edge of the slide. Therefore, when the maximum scan setting is selected for MAX or FLEX systems (the SPRINT instrument has one scan setting), it is more likely that some FOV will be dropped. Reduction in number of FOV counted does not compromise data quality and is accounted for during data normalization. However, when a substantial percentage of FOVs are not successfully counted, there may be issues with the resulting data. Consistent large reductions in percentages can be indicative of an issue associated with the instrumentation.
If Imaging QC is greater than 0.75, then a re-scan may be performed, if desired, in an attempt to increase the number of FOVs counted, though as a routine practice this is not necessary or recommended. If Imaging QC is less than 0.75, then clean the bottom of the cartridge with a lint-free wipe and re-scan the cartridge, being sure that the cartridge lays flat in the scanner. Please note that the re-scan option is currently available for MAX and FLEX systems only; it is not available for the SPRINT system (as of October 25, 2016). If re-scan does not improve imaging performance in samples with Imaging QC less than 0.75, then email the raw data (RCC files) and instrument log files to email@example.com. The data and logs will be examined for hardware or assay problems.
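The Imaging QC fraction itself is simple arithmetic; a sketch using the nSolver column names described above (the 0.75 threshold is from this answer):

```python
# The Imaging QC fraction, using the nSolver column names described above
# (0.75 threshold per the text).

def imaging_qc(fov_counted, fov_count):
    """Fraction of attempted fields of view that were successfully imaged."""
    return fov_counted / fov_count

def imaging_flagged(fov_counted, fov_count, threshold=0.75):
    """A lane is flagged when its Imaging QC falls below 75%."""
    return imaging_qc(fov_counted, fov_count) < threshold
```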
A QC flag does not necessarily mean that data from a flagged lane cannot be used. The thresholds for QC flags are set at a conservative level in order to both catch samples which may have failed, and also to identify samples with usable data which happened to experience a reduction in assay efficiency.
To determine whether a QC flag is indicating a critical problem, examine the raw and normalized data and check whether the flagged samples have a poorer limit of detection for low count transcripts when compared to non-flagged samples. For some genes, differences in expression level between samples will be caused by differences in treatment or pathology, so it may be more appropriate to determine if the expression of only the low count genes for any flagged lane falls within the range of expression values observed across a number of unflagged samples which come from different treatments or pathologies.
One can approach this potential limit of detection question in a number of ways. First, a simple visual scan of the data may suffice to detect problems in the flagged samples. This can be performed on raw data which have been background subtracted in nSolver to identify targets that are below the background. Alternatively, outlier samples could be identified by generating a heat map of normalized data from all samples to see if the flagged samples in question are strongly divergent from other samples with similar pathology. Another option would be to examine the calculated QC metrics within nSolver (right click or command click on one of the table columns in the raw data table, and choose ‘select hidden columns’). If these QC metrics have only missed the threshold by a very small margin (e.g., the FOV registration is 74% instead of 75%), then the resultant data are generally going to be quite robust and usable.
More details on QC flags can be found in the nSolver manual. If QC flags become more than a rare anomaly, we encourage you to contact our support team (firstname.lastname@example.org and/or your local Applications Scientist) in order to assist you in tracking down the root cause of these potential problems with the assay consistency.
A positive control normalization flag indicates that the POS controls for the lane (sample) in question are more than three-fold different (greater or smaller) than the POS control counts from the other samples in the experiment. High POS control counts are rarely problematic, so a flag usually only indicates a problem when the POS controls are particularly low for a sample. Such low POS counts are indicative of relatively low assay efficiency at capturing and counting targets, which may lower sensitivity or introduce bias into the assay.
To determine whether a POS control normalization flag is indicating a critical problem, examine the raw and normalized data and check whether the flagged samples have a poorer limit of detection for low count transcripts when compared to non-flagged samples. For some genes one should anticipate differences in expression level between samples due to differences in treatment or pathology, so it may be more appropriate to see if the expression of the low count genes for any flagged lane falls in the range of expression values observed across a number of unflagged samples which come from different treatments or pathologies.
One can approach this potential limit of detection question in a number of ways. First, a simple visual scan of the data may suffice to detect problems in the flagged samples. This can be performed on raw data which have been background subtracted in nSolver to identify targets that are below the background. Alternatively, outlier samples could be identified by generating a heat map of normalized data from all samples to see if the flagged samples in question are strongly divergent from other samples with similar pathology. Another option would be to examine the calculated POS control normalization factors within nSolver (found in the normalized data table on the far right). If these factors have only exceeded the threshold by a very small margin (e.g., the POS control normalization factor is 3.2), then one can usually assume that the resultant data are generally going to be quite robust and usable for the majority of data sets.
More details on POS control normalization flags can be found in the nSolver manual. If POS control normalization flags become more than a rare anomaly, we encourage you to contact our support team (email@example.com and/or your local Applications Scientist) in order to assist you in tracking down the root cause of these potential problems with the assay consistency.
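The POS control normalization factor and its 3-fold flag can be sketched as follows. This is illustrative, based on the description above rather than the nSolver source.

```python
import statistics

# Illustrative POS control normalization factors and the 3-fold flag
# (based on the description above, not on nSolver source).

def pos_norm_factors(pos_geomeans):
    """Per-sample factor: mean of all samples' POS geometric means divided
    by each sample's own POS geometric mean."""
    grand = statistics.mean(pos_geomeans)
    return [grand / g for g in pos_geomeans]

def pos_norm_flags(pos_geomeans, fold=3.0):
    """Flag samples whose factor is more than 3-fold from 1 in either direction."""
    return [f > fold or f < 1 / fold for f in pos_norm_factors(pos_geomeans)]
```

A sample with low POS counts gets a large factor (counts are scaled up), which is why low POS counts, rather than high ones, usually signal a problem.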
A QC flag for content normalization indicates that the flagged sample had a content (or housekeeping gene) normalization factor more than 10-fold different from the average sample in the same experiment. In other words, the flagged sample had significantly lower or higher counts in the Housekeeping genes which are used to normalize sample input. Although unusually high housekeeping gene counts would not typically be problematic, it is much more common to see samples with lower housekeeping gene counts, and these would be flagged if the content correction factor for that sample were greater than 10.
Content normalization flags can be caused by either a significant reduction in overall assay efficiency for that sample, or because of an effective reduction in quantity or quality (fragmentation) of the input RNA. The likelihood of a reduction in assay efficiency can be assessed by the presence of any other QC flags for that sample. If the lane failed the QC specifications by a large margin for any of the other QC metrics (including POS control normalization), then overall counts may be reduced enough to also cause a Content normalization flag. Essentially, in this scenario the assay is working so poorly that the counts for endogenous and housekeeping genes are dramatically reduced even if sufficient RNA targets are present. If, however, the sample had no other QC flags except that for Content normalization, this usually means that the assay is working well, but there were insufficient RNA targets to count. This can be caused either by low RNA concentrations or highly fragmented RNA, such as from an archival FFPE sample.
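The triage logic described above can be summarized as a small decision helper. This is purely illustrative and is not an nSolver feature.

```python
# Illustrative triage of a Content normalization flag, following the two
# causes described above (a helper for thinking, not an nSolver feature).

def content_flag_cause(content_flagged, other_qc_flagged):
    """Distinguish the two likely causes of a Content normalization flag."""
    if not content_flagged:
        return "no content flag"
    if other_qc_flagged:
        return "overall assay efficiency problem (check the other QC flags)"
    return "insufficient RNA targets (low input or fragmented RNA, e.g. FFPE)"
```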
To determine whether a Content normalization flag is creating a critical problem, examine the raw and normalized data and check whether the flagged samples have a poorer limit of detection for low count transcripts when compared to non-flagged samples. For some genes one should anticipate differences in expression level between samples due to differences in treatment or pathology, so it may be more appropriate to see if the expression of the low count genes for any flagged lane falls in the range of expression values observed across a number of unflagged samples which come from different treatments or pathologies.
One can approach this potential limit of detection question in a number of ways. First, a simple visual scan of the data may suffice to detect problems in the flagged samples. This can be performed on raw data which have been background subtracted in nSolver to identify targets that are below the background. Alternatively, outlier samples could be identified by generating a heat map of normalized data from all samples to see if the flagged samples in question are strongly divergent from other samples with similar pathology. Another option would be to examine the calculated Content normalization factors within nSolver (found in the normalized data table on the far right). If these factors have only exceeded the threshold by a very small margin (e.g., a Content normalization factor of 10.6), then one can usually assume that the resultant data are generally going to be quite robust and usable for the majority of data sets.
More details on Content normalization flags can be found in the nSolver manual. If QC flags become more than a rare anomaly, we encourage you to contact our support team (firstname.lastname@example.org and/or your local Applications Scientist) for help tracking down the root cause of these potential assay consistency problems.
Binding Density (BD) is affected by several different factors:
- The input amount. More sample input will result in an increased BD.
- The expression level of the targets in the CodeSet. If the targets in the CodeSet are highly expressed, BD will go up simply because there are more mRNA molecules present in your samples that are targeted by the probes in the CodeSet.
- The size of the CodeSet. If a CodeSet contains probes for more targets, then BD will usually be higher.
A binding density refers to the number of barcodes/μm². The recommended range is from 0.05 to 2.25 for MAX and FLEX instruments and 0.1 to 1.8 for SPRINT. If the density is less than 0.05, the instrument may not be able to focus on the cartridge due to a lack of optical information. If the density is greater than 2.25, the barcodes will begin to overlap, resulting in a loss of data, as overlapping barcodes are excluded from the analysis. As a general rule of thumb, one lane can accurately detect a total of about 2 million barcodes.
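To make these thresholds concrete, a lane's binding density could be checked against the ranges above. This is a minimal sketch, not NanoString software; the cutoffs are those quoted in this answer and may differ by software version.

```python
def binding_density_status(density, instrument="MAX/FLEX"):
    """Classify a lane's binding density (barcodes per square micron)
    against the ranges quoted above; these cutoffs are the document's,
    not necessarily the defaults in every software version."""
    lo, hi = (0.05, 2.25) if instrument == "MAX/FLEX" else (0.1, 1.8)
    if density < lo:
        return "too sparse: the instrument may fail to focus"
    if density > hi:
        return "saturated: overlapping barcodes are excluded (data loss)"
    return "in range"
```

For example, a SPRINT lane at a density of 1.9 would be flagged as saturated, while the same density falls in range on a MAX/FLEX instrument.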
NanoString provides several options for performing background subtraction using the nSolver Analysis Software. To estimate background, NanoString provides several probes in each CodeSet for which no target is present. These negative controls can be used to estimate background levels in your experiment. Background levels may be estimated using either the average of the negative controls for that lane or the average of the negative controls plus a multiple of the standard deviation of all the negative controls in a lane. Alternatively, background levels may also be estimated by running a blank lane in which nuclease-free water instead of RNA is added as input; this will generate a background measurement that will estimate probe-specific background levels instead of general background levels, as estimated from a set of negative controls. Once the appropriate background level has been determined, the background counts are subtracted from the raw counts to determine the true counts.
The type of normalization strategy that you employ depends on your experiment. If you expect only a few gene targets to change, then either a reference gene normalization or a global normalization will be sufficient. However, if you expect the majority of your gene targets to change, then you should not perform a global normalization. In this case, a reference gene normalization would be most appropriate, provided that the group of reference genes selected is stably expressed across your experimental conditions.
Yes. The fold change data obtained from an nCounter analysis correlates well with fold change results obtained from microarray analyses. The level of concordance between the nCounter and microarray results is similar to comparisons of different microarray platforms.
Yes, we’ve found that there is an excellent correlation between nCounter analyses and qPCR analyses, both in terms of relative expression levels and fold changes. Moreover, the multiplexing capabilities of nCounter analyses increase the efficiency with which data can be obtained at qPCR levels of sensitivity. We therefore recommend using nCounter analyses to extend your current set of qPCR data.
While many mRNAs demonstrate low variance across tissues, there simply is no single set of mRNAs that can be used across all experimental conditions and tissues.
It is recommended that every CodeSet design have at least 3 – 6 “reference” or “housekeeping” targets to use for technical variance normalization. Characteristics of effective reference targets are 1) minimal variance across samples, and 2) high correlation with each other (assuming technical variance is much lower than biological variance).
If you have generated data on nCounter or other platforms previously that show certain targets do not vary across your treatment conditions, and that they fit the above criteria, these would be ideal targets to start with as reference mRNAs. However, if you haven’t characterized candidate reference targets yet, it is important to measure the expression of at least 6 – 8 candidate genes in a pilot experiment. Starting with this number of candidates should allow you to identify a set of 3 or more useful targets, as some may drop out due to higher than expected variance or biological effects across your samples and treatments.
To select candidate genes, potential reference targets can be gleaned from online reference gene tools (such as Refgenes or NormFinder), pre-existing data, or the literature in your field. Please note the reference gene tools are not affiliated with NanoString; please see the linked websites for support.
Pathway scores are designed to summarize expression level changes of biologically related groups of genes. This score can help identify pathways that are being altered by the pathology or treatment under study, and thus can help contextualize differential expression changes observed for individual genes. Pathway scores are derived from the first Principal Component Analysis (PCA) scores (first eigenvectors) for each sample based on the individual gene expression levels for all the measured genes within a specific pathway. Although expression levels from multiple genes will generally comprise this first PC, some of these genes will have much higher weight applied to them if they capture a greater proportion of the variability in the data.
Typically, Pathway Scores will be positive for pathways containing many up-regulated genes, and negative for those containing more down-regulated genes. One can generally make direct comparisons between scores of an individual pathway across samples within an experiment (that is, a comparison of one sample’s cell cycle pathway score to another sample’s cell cycle pathway score), and a higher score for the same pathway will generally mean greater levels of up-regulation. However, comparisons between different pathways within the same experiment or across different experiments are not recommended. Moreover, because of the complexity of the calculations for Pathway scores, interpretation of these should never be performed without correlating them to other analysis results to ensure that they are placed in the correct biological context. Thus, before concluding that a pathway has been upregulated in a group of samples, it is advised to correlate the pathway-level findings to the expression levels of individual genes within that pathway.
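As a rough sketch of how a first-PC score of this kind is derived (this illustrates the idea only, not nSolver's exact implementation; the expression matrix is hypothetical):

```python
import numpy as np

def pathway_scores(log2_expr):
    """Per-sample pathway score from the first principal component.

    log2_expr: samples x genes matrix of log2 expression for the genes
    in one pathway. Returns one score per sample.
    """
    centered = log2_expr - log2_expr.mean(axis=0)      # center each gene
    # The rows of vt are the PCA loadings (eigenvectors); vt[0] is PC1
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pc1 = vt[0]
    # A PC's sign is arbitrary; orient it so that broadly up-regulated
    # samples receive positive scores
    if pc1.sum() < 0:
        pc1 = -pc1
    return centered @ pc1

# Samples with uniformly higher expression of the pathway's genes score higher:
expr = np.log2(np.array([[2, 4, 8], [4, 8, 16], [8, 16, 32], [16, 32, 64]], float))
scores = pathway_scores(expr)
```

Note that in real data the genes will not move in lockstep, and PC1 will capture only part of the variance, which is one reason these scores always need to be cross-checked against individual gene expression.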
Gene Set Analysis scores are essentially an averaging of the significance measures across all the genes in the pathway, as calculated by the differential expression module. The exact calculations for these GSA scores can be found in the Advanced Analysis User Manual.
GSA scores come in two flavors: Global Significance Scores and Directed Global Significance Scores. The former (GSS) measures the overall significance of the changes within a pathway and will always be positive regardless of whether genes are up- or down-regulated. The Directed Global Significance Scores are more akin to Pathway Scores as they may have either negative or positive values (for down- and up-regulated pathways, respectively). Directed or undirected GSA Scores of greater magnitude will generally indicate a stronger pattern in the pathway level expression changes, and because these scores have been scaled to the same distribution (that of the t-statistic), they will be more robust to comparisons between different pathways or experiments. A high score indicates that a large proportion of the genes in a pathway are exhibiting changes in expression across groups of samples.
Both Pathway Analysis and Gene Set Analysis (GSA) are higher level assessments of expression changes that may be occurring within related sets of genes from the same pathway. Because both scores are generated from differences in expression between samples across many genes, the scores should be roughly concordant with each other. However, differences in the way that the calculations are performed may lead to some divergence between scores, as well as differences in the interpretation of these higher-level measurements.
One important difference is that Pathway scores are generated for individual samples, while GSA scores are ‘population’ or ‘group’ level statistics and thus measure patterns between sample groups. A subtler difference is that a Pathway score uses results from only the first PC of the PCA, meaning that it can explain only some proportion of the variance in the data, which may also cause some differences when making comparisons to the GSA scores.
Notably, Pathway scores are generated from weighted expression level data, while the genes from any pathway are given equal weight in the calculation of GSA scores. The differential gene weights in Pathway scores can allow them to detect changes that affect only a small portion of the genes in a pathway, which may be obscured in GSA if most genes in the pathway do not show significant changes in expression (that is, have a small t-statistic). Similarly, if many genes in a pathway show consistent trends in expression which are not individually significant, Pathway scores may have better sensitivity to detect these trends compared to the statistical summation approach of GSA.
In summary, comparing pathway scores directly to those from the GSA module should be performed with caution, and should always be correlated or cross-referenced with expression level changes in individual genes to ensure that biological interpretations are supported.
Parametric statistical tests operate on the assumption that the data conform to some expected distribution, such as a normal distribution when performing a t-test. Transforming linear data into log2 values will generally satisfy this requirement, so it is advisable to log2-transform normalized count data prior to any parametric statistical analyses.
Both the basic nSolver and the Advanced Analysis module automatically perform these log transformations in the background before performing any statistical testing, and as such all the reported p-values are based on log-transformed data. It should be noted that while data in basic nSolver are still generally displayed after being back transformed into linear space, results in the advanced analysis module (counts as well as fold changes) are displayed only in log2 space.
If performing any data analysis outside of the nSolver software, it is recommended to work from log-transformed nCounter® data.
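A minimal sketch of this transformation follows; the +1 pseudocount (to avoid taking the log of zero) is a common convention, not necessarily what nSolver uses internally.

```python
import math

def log2_transform(counts, pseudocount=1):
    """Log2-transform linear counts before parametric testing.
    The pseudocount avoids math errors on zero counts."""
    return [math.log2(c + pseudocount) for c in counts]

# Heavily skewed linear counts become far more symmetric on the log2 scale:
linear = [7, 15, 31, 63, 1023]
logged = log2_transform(linear)    # [3.0, 4.0, 5.0, 6.0, 10.0]
```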
Cell type profiling scores are generated for immune cell types using expression levels of cell-type specific mRNAs as described in the literature. For details of the selection and validation process for these markers, see Danaher et al 2016 (Gene expression markers of Tumor Infiltrating Leukocytes BioRxiv August 11, 2016).
The cell type score itself is calculated as the mean of the log2 expression levels for all the probes included in the final calculation for that specific cell type. Because the scores are dependent on probe-specific counting and capturing efficiencies, these should only be interpreted as relative cell abundance values compared to the same cell type within other samples or groups of samples. The scores should not be used as measures for the abundance of a cell type relative to other cell types within the same sample, nor should they be used to quantitate cell abundance within a single sample.
Cell type scores may be calculated as raw or relative scores. The raw cell scores will measure the overall cell abundance for that type of cell, whereas the relative cell scores measure the specific cell abundance relative to (essentially normalized to) the abundance of Tumor Infiltrating Leukocytes (TILs) in that sample. These are defined as the average of B-cell, T-cell, CD45, Macrophage, and Cytotoxic cell scores. This relative score can alternatively be customized to incorporate a baseline cell type or mixed population other than TILs.
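A toy sketch of the raw score calculation described above (the gene names and counts here are purely illustrative, not the validated marker set):

```python
import math

def cell_type_score(marker_counts):
    """Raw cell type score: the mean of log2 expression across the
    markers retained for that cell type (hypothetical probe -> count)."""
    logs = [math.log2(c) for c in marker_counts.values()]
    return sum(logs) / len(logs)

# Compare the SAME cell type across two samples (never across cell types):
sample_a = {"CD8A": 512, "CD8B": 128}    # hypothetical marker counts
sample_b = {"CD8A": 128, "CD8B": 32}
diff = cell_type_score(sample_a) - cell_type_score(sample_b)
```

Here the difference of 2 in log2 space suggests roughly 4-fold higher relative abundance of that cell type in sample A, consistent with the relative (not absolute) interpretation described above.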
The genes used for immune cell scoring comprise a subset of high confidence markers validated by co-expression patterns via a large survey of TCGA samples (N=9986), and confirmed by nCounter and protein analysis (Danaher et al, 2016 http://dx.doi.org/10.1101/068940). To some extent, these markers thus already represent high confidence markers for these cell types.
An additional level of quality control is by default performed within the Advanced Analysis module, whereby correlations are calculated between the expression levels of these candidate cell type markers. Those markers which do not correlate with other cell type-specific markers are discarded from the estimates of abundance. Such markers may be expressed at low levels in another cell type, or they may show highly variable expression levels within their specific cell type, in either case making the gene a poor marker for cell type abundance.
The Advanced Analysis module will also, by default, utilize a resampling technique to generate a significance level for confidence in the individual cell type scores. Cell scores with p-values below a threshold level of confidence (e.g., 0.05) would be considered higher confidence stand-alone estimates of abundance. Note that some cell scores will only ever be based on a single literature-validated cell-specific marker, and the statistical resampling method can only ever return a p value of ‘1.0’ for these scores (i.e. Tregs are only characterized by expression of the gene FOXP3). Importantly, the cell abundance levels for these and other cell type scores with p-values greater than 0.05 should not necessarily be ignored, nor should they be considered unrelated to immune cell abundance. Rather, scores without this independent confirmation should be considered hypotheses, with a confidence level based on the strength of these marker associations with cell type from the literature. The marker for Tregs, for example, is considered quite robust for this cell type and can therefore be reliably used as an estimate of Treg cell abundance, despite the single gene abundance score having a p-value of 1.0 in the software.
Multi-RLF experiments come in two different types: CrossRLF/Batch Calibration and MultiRLF Merge.
The CrossRLF/Batch Calibration option allows you to consolidate datasets of primarily distinct samples that were each run on multiple CodeSets (RLF) or on different CodeSet or reagent lots. At least one calibrator sample must be run across all CodeSets/lots. An ideal calibrator sample has robust counts (>200) for all genes of interest.
The Multi-RLF Merge option allows for data from a set of identical samples run across two or more CodeSets to be aggregated.
To create a CrossRLF experiment in nSolver, both RLFs need to be uploaded before creating the experiment. nSolver will normalize within each RLF (i.e., each batch), followed by CodeSet calibration using the calibrator samples run in each lot. The manual goes over this process in the section entitled “Multi-RLF Experiments & Batch Calibration”, on pages 92-94.
To use Advanced Analysis with a CrossRLF/batch-corrected experiment, start with the “Normalized Data,” choose a Custom Analysis, and make sure to uncheck the “Normalize mRNA” box in the Normalization module options.
To create a multi-RLF Merge experiment in the Advanced Analysis module, one must first create a basic nSolver experiment for each separate CodeSet. It is critical to ensure that sample names are well annotated so identical samples can be easily matched up in the combined dataset. Since the geNorm algorithm for automatic housekeeping gene selection is not available in the Advanced Analysis module for multi-RLF Merge experiments, normalization should be finalized in the initial experiment before proceeding to the next stage. If housekeeping genes have been validated, these can be picked manually in each respective basic nSolver experiment. Alternatively, each experiment should be run separately through the Advanced Analysis module with the sole purpose of identifying the most stably expressed targets using the geNorm module. Thereafter, the basic nSolver experiment should be re-run using these selected genes for normalization.
Next, upload the RLF files for each CodeSet into nSolver.
Create a multi-RLF Merge experiment in nSolver. The manual goes over this process in the section entitled “Multi-RLF Experiments & Batch Calibration”, on pages 95-97. Here you will need to align the various nSolver files (one for each sample for each CodeSet), specifically matching up the sample names across CodeSets (panels).
Lastly, from the new multi RLF experiment, select all the samples from the normalized data table, and run them through the Advanced Analysis Module as normal. The option to normalize the data here is automatically disabled, as the software expects that the data have already been normalized according to the methods outlined above.
Data normalization is designed to remove sources of technical variability from an experiment, so that the remaining variance can be attributed to the underlying biology of the system under study. The precision and accuracy of nCounter Gene Expression assays are dependent upon robust methods of normalization to allow direct comparison between samples. There are many sources of variability that can potentially be introduced into nCounter assays. The largest and most common categories of variability originate from either the platform or the sample. Both types of variability can be normalized using standard normalization procedures for Gene Expression assays.
Standard normalization uses a combination of Positive Control Normalization, which uses synthetic positive control targets, and CodeSet Content Normalization, which uses housekeeping genes, to apply a sample-specific correction factor to all the target probes within that sample lane. These correction factors will control for sources of variability such as pipetting errors, instrument scan resolution, and sample input variability that affect all probes equally.
Note that Positive Control Normalization will not correct for sample input variability, and thus should usually be used in combination with CodeSet Content (housekeeping gene) Normalization. Performing such a two-step normalization will usually not differ mathematically from Content Normalization alone, and is thus somewhat redundant. Nevertheless, normalizing to both target classes will provide a good indicator of how technical variability is partitioned between the two major sources of assay noise (platform and sample), and thus may provide a good tool for troubleshooting low assay performance. Normalization workflows are described below.
nCounter Reporter probe (or TagSet) tubes are manufactured to contain six synthetic ssDNA control targets. The counts from these targets may be used to normalize all platform-associated sources of variation (e.g., automated purification, hybridization conditions, etc.).
The procedure is as follows:
- Calculate the geometric mean of the positive controls for each lane (POS_A to POS_E).
- Calculate the arithmetic mean of these geometric means for all sample lanes.
- Divide this arithmetic mean by the geometric mean of each lane to generate a lane-specific normalization factor.
- Multiply the counts for every gene by its lane-specific normalization factor.
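The four steps above can be sketched in a few lines of code. This is a minimal illustration with made-up counts, not NanoString's implementation.

```python
import math

def normalize_lanes(pos_counts, gene_counts):
    """Positive Control Normalization sketch.

    pos_counts:  per-lane lists of positive control counts
    gene_counts: per-lane lists of raw counts for every gene
    Returns lane-normalized gene counts.
    """
    # Step 1: geometric mean of the positive controls for each lane
    geo_means = [math.prod(lane) ** (1 / len(lane)) for lane in pos_counts]
    # Step 2: arithmetic mean of these geometric means across all lanes
    grand_mean = sum(geo_means) / len(geo_means)
    # Step 3: lane-specific normalization factor
    factors = [grand_mean / gm for gm in geo_means]
    # Step 4: apply each lane's factor to every gene in that lane
    return [[c * f for c in lane] for lane, f in zip(gene_counts, factors)]

# A lane with half the overall positive control signal gets twice the
# correction, so both lanes end up on a common scale:
pos = [[100, 50, 25], [200, 100, 50]]
genes = [[10, 40], [20, 80]]
normed = normalize_lanes(pos, genes)
```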
It is expected that some noise will be introduced into the nCounter assay due to variability in sample input. For most experiments, normalization of sample input is most effectively done using so-called housekeeping genes. These are mRNA targets included in a CodeSet which are known to or are suspected to show little-to-no variability in expression across all treatment conditions in the experiment. Because of this, these targets will ideally vary only according to how much sample RNA was loaded.
Using the geometric mean of three housekeeping genes, at minimum, to calculate normalization factors is highly recommended. This is done in order to minimize the noise from individual genes and to ensure that the calculations are not weighted towards the highest expressing housekeeping targets. It is important to note that some previously-identified housekeeping genes may, in fact, behave poorly as normalizing targets in the current experiment, and may therefore need to be excluded from normalization.
The procedure is the same as that for Positive Control Normalization:
- Calculate the geometric mean of the selected housekeeping genes for each lane.
- Calculate the arithmetic mean of these geometric means for all sample lanes.
- Divide this arithmetic mean by the geometric mean of each lane to generate a lane-specific normalization factor.
- Multiply the counts for every gene by its lane-specific normalization factor.
Samples with normalization flags have counts for the positive controls and/or housekeeping genes that are much lower or higher than those of most of the samples included in the analysis. Samples with a Positive Control Normalization flag may indicate a notable difference in hybridization/assay performance as compared to most of the samples included in the analysis. In certain situations, samples with these Positive Control Normalization flags may need to be re-run/excluded. Samples with a CodeSet Content Normalization flag may indicate a notable difference in RNA quality and/or input amount as compared to most samples included in the analysis. Samples will have CodeSet Content Normalization flags if the CodeSet Content Normalization factor is < 0.1 or > 10, as anything beyond these values will result in inaccurate normalization. As such, samples with CodeSet Content Normalization flags may need to be excluded or (if possible) re-run at higher or lower input amounts depending on the normalization factor.
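To make the 0.1-10 window concrete, here is a hedged sketch that computes Content normalization factors from housekeeping counts and flags out-of-range lanes; the counts are invented and this is not nSolver's exact code.

```python
import math

def content_factors_and_flags(hk_counts, lo=0.1, hi=10.0):
    """Compute CodeSet Content normalization factors from per-lane
    housekeeping gene counts and flag lanes outside the lo-hi window."""
    geo = [math.prod(lane) ** (1 / len(lane)) for lane in hk_counts]
    mean_geo = sum(geo) / len(geo)
    factors = [mean_geo / g for g in geo]
    flags = [f < lo or f > hi for f in factors]
    return factors, flags

# A lane with ~20-fold lower housekeeping counts gets a factor > 10
# and is therefore flagged:
hk = [[400, 400], [400, 400], [20, 20]]
factors, flags = content_factors_and_flags(hk)
```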
The best approach for normalizing miRNA data will depend mostly on the sample type they represent. For everything except biofluids (such as plasma or serum), using a “global” normalization method which normalizes to total counts of the 100 most highly expressed (on average) miRNA targets across all samples is recommended. This is called the TOP 100 method in the software. Importantly, this method does not use the Positive Control or Positive Ligation Control probes for any of these calculations.
However, it does get more complicated with biofluids (or any other sample) where the number of expressed targets drops below ~150-200 targets. As a frame of reference, targets expressed above background are usually identified by comparison to the Negative control probes (using either the mean of the NEG probes, the mean + 2 standard deviations, the maximum NEG probe value, or 100 counts to be conservative).
When normalizing samples from biofluids, a judgement call can be made depending on how many targets are expressed above background. In the miRNA assay, background would usually be ~30 counts, but will vary from one experiment to the next. Therefore, sometimes a global approach (TOP 100 method) can still work with biofluids if samples express 100-150 miRNA targets above this cutoff.
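One way to make that call is to count how many targets exceed a background threshold derived from the negative controls. The sketch below uses the mean + 2 SD option mentioned above; the counts are invented for illustration.

```python
import math

def targets_above_background(mirna_counts, neg_counts, sds=2):
    """Count miRNA targets expressed above background, estimated here
    as mean(NEG) + sds * SD(NEG), one of the options mentioned above."""
    n = len(neg_counts)
    mean = sum(neg_counts) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in neg_counts) / (n - 1))
    threshold = mean + sds * sd
    return sum(1 for c in mirna_counts if c > threshold)

# With these toy NEG counts the threshold lands near 16, so three of
# the five targets count as expressed:
neg = [10, 12, 8, 14, 6, 10]
counts = [5, 40, 9, 120, 33]
n_expressed = targets_above_background(counts, neg)
```

In practice this tally would be run per sample across the full panel, and the global TOP 100 method retained only if enough targets clear the threshold.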
However, if this is not the case, the identification of good “housekeeper” miRNAs will likely allow you to normalize and obtain robust results. There are not many well-characterized housekeeper miRNA targets from plasma or other biofluids, as they do seem to vary depending on extraction kits and pathologies being studied. Consequently, a literature search would not necessarily help you determine appropriate housekeepers and a more data-driven approach would be better suited. Using third party software or algorithms can identify the most stably expressed targets within the particular experiment. It is recommended that this method of identifying housekeeping genes be repeated as more data is generated to confirm these are appropriate for the entirety of the study and not just for the initial experiment.
Among published algorithms for stable housekeeper gene identification, NormFinder offers the path of least resistance, because it is free and easy to use.
Claus Lindbjerg Andersen, Jens Ledet Jensen and Torben Falck Ørntoft. Cancer Res 2004;64:5245-5250.
geNorm is another program that uses slightly different principles. Specifically, NormFinder chooses targets with the lowest within and between group variance, while geNorm also picks multiple targets that give the lowest estimates of variance when they are used together (NormFinder only picks them individually or gives the best two together). geNorm can be obtained with a license.
If Spike-In synthetic miRNAs are used to normalize variance introduced in purification of samples, it is assumed and highly recommended that equal volume inputs are used across samples. Synthetic oligos must be spiked in before sample extraction, and it is strongly recommended that Spike-Ins are used for all samples in that experiment.
THREE METHODS for NORMALIZATION
- Normalize using only the Spike-In control probes
- Normalize using only the Housekeeping miRNA targets as identified by the user.
- First normalize all the endogenous counts (including the putative miRNA housekeepers) to the Spike-In control probes. Then use the Spike-In-normalized miRNA housekeeper counts to normalize the endogenous miRNA targets. This option is not available in the nSolver software, so it would need to be performed in Excel. The basic workflow is below:
WORKFLOW in EXCEL for normalization (only for step 3 above)
1. For each lane calculate the geometric mean of the Spike-In controls.
2. Calculate the arithmetic mean of these geometric means across all lanes.
3. Divide this arithmetic mean by the geometric mean in each lane (calculated in #1) to get a lane-specific normalization factor.
4. Multiply all the endogenous counts in a lane by its lane-specific normalization factor.
5. Repeat steps 1 through 4 using the Spike-In-normalized housekeeper miRNA targets.
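The five-step workflow above could be scripted instead of done in Excel. Here is a hedged sketch with toy counts, using a single Spike-In control and a single housekeeper per lane for brevity:

```python
import math

def _lane_factors(control_counts):
    """Steps 1-3: geometric mean per lane, arithmetic mean of those
    means, then divide to get a lane-specific factor."""
    geo = [math.prod(lane) ** (1 / len(lane)) for lane in control_counts]
    mean_geo = sum(geo) / len(geo)
    return [mean_geo / g for g in geo]

def two_step_normalize(spike_ins, hk_mirnas, endogenous):
    """Method 3 sketch: normalize to Spike-In controls first, then to
    the Spike-In-normalized housekeepers (all inputs are per-lane
    lists of counts; the data are illustrative)."""
    # Pass 1 (steps 1-4): normalize everything to the Spike-In controls
    f1 = _lane_factors(spike_ins)
    hk1 = [[c * f for c in lane] for lane, f in zip(hk_mirnas, f1)]
    endo1 = [[c * f for c in lane] for lane, f in zip(endogenous, f1)]
    # Pass 2 (step 5): renormalize to the Spike-In-normalized housekeepers
    f2 = _lane_factors(hk1)
    return [[c * f for c in lane] for lane, f in zip(endo1, f2)]

# Two lanes differing only in input amount converge after both passes:
spikes = [[100], [50]]
hk = [[40], [10]]
endo = [[8], [2]]
result = two_step_normalize(spikes, hk, endo)
```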
The three methods for normalization may yield similar results. Typically, the better normalization approaches will result in overall lower variance. Below is an example graph depicting what would be expected of a typical normalization method. For each of the three methods, variance should be calculated, and the lowest variance method should be chosen. Theoretically, the third method provides the best reduction in technical and sample input variance.
The Housekeeping (HK) Gene selection in the Advanced Analysis is performed by default using the geNorm algorithm described in the paper below:
Vandesompele J, De Preter K, Pattyn F, Poppe B, Van Roy N, De Paepe A, et al. Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. Genome biology. 2002;3(7):research0034.
The geNorm algorithm assumes that HK gene expression does not change across all samples, irrespective of the experimental condition. Based on that assumption, geNorm expects that the ratio between HK Gene A and HK Gene B within sample 1 will be the same as the ratio between HK Gene A and HK Gene B in sample 2, and sample 3, etc. If that is not true for the dataset, the aberrant gene is not used as a HK gene for normalization. As such, geNorm looks at the different ratios between the potential HK genes and iteratively removes HK genes that do not perform as expected. In the end, it retains a set of optimal HK genes to be used for the final normalization. A more detailed explanation can be found in the Advanced Analysis User Manual (MAN-10030) on page 41.
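The ratio-based idea can be illustrated with a simplified stability measure, loosely modeled on geNorm's M value. This is a sketch of the principle, not the published algorithm in full; the candidate genes and counts are made up.

```python
import math

def genorm_stability(counts):
    """For each candidate HK gene, the average standard deviation of
    its pairwise log2 ratios with every other candidate, across
    samples. Lower values = more stable (simplified from geNorm).

    counts: dict mapping gene name -> per-sample count list."""
    genes = list(counts)

    def ratio_sd(a, b):
        ratios = [math.log2(x / y) for x, y in zip(counts[a], counts[b])]
        mean = sum(ratios) / len(ratios)
        var = sum((r - mean) ** 2 for r in ratios) / (len(ratios) - 1)
        return math.sqrt(var)

    return {g: sum(ratio_sd(g, h) for h in genes if h != g) / (len(genes) - 1)
            for g in genes}

# Two candidates whose counts stay proportional across samples are
# stable; one with erratic ratios is penalized:
candidates = {
    "HK_A": [10, 20, 40],
    "HK_B": [5, 10, 20],    # perfectly proportional to HK_A
    "HK_C": [10, 5, 80],    # erratic ratios
}
m = genorm_stability(candidates)
```

Note that HK_A and HK_B stay stable here even though their absolute counts triple or quadruple across samples, which is exactly why a within-sample ratio method tolerates input variability that a %CV filter would not.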
Within a typical nSolver workflow, HK genes can be selected by the user based on their variability across samples (%CV) and/or their average expression levels. Overall, this is a relatively good approach. However, there is a risk of choosing less optimal HK genes, because coincidental variability in the data can make some genes appear artificially stable (low %CV); conversely, input variability can make genes that are normally stable appear unstable. In the latter situation, geNorm will still identify the gene as a good housekeeping gene (its calculation is ratio-based within samples), whereas the %CV method will discard it. Therefore, relying solely on %CV is not recommended; a consistent trend among the annotated housekeeping genes must also be considered.
- FOV (field of view) registration is as close to 100% as possible, but minimally 75%.
- Binding density is in the linear dynamic range (0.05-2.25 for MAX/FLEX, 0.1-1.8 for SPRINT)
- POS controls (POS_A to POS_E) have robust counts and are in a linear range (R^2 higher than 0.95)
- NEG controls have low counts (average < 50 is expected)
- At least three housekeeping genes have reasonable counts that are above background and cover the range of gene expression (counts in the thousands, counts in the hundreds, etc.)
The nCounter systems monitor the amount of barcode bound to each sample lane in a cartridge by calculating a metric called Binding Density. More precisely, the binding density for each sample is equal to the number of fluorescent spots per square micron that are bound to each sample lane in the cartridge. Saturation of barcodes bound to the cartridge surface can potentially compromise data quantification and linearity of the assay.
Within nSolver 4.0 (if using a version other than 4.0, consult the user manual for that version), the default, optimal range for Binding Density is:
- 0.1 – 2.25 for MAX/FLEX instruments
- 0.1 – 1.8 for SPRINT instruments
Binding densities flagged for being greater or less than the optimal range do not necessarily indicate assay failure. Closer inspection of the data to determine the specific impact, if any, of high or low binding density is highly recommended.
A combination of several factors can affect binding density, including:
- Assay input quantity: the higher the amount of input used for the assay, the higher the Binding Density will be. The relationship between input amount and Binding Density is linear until the point of assay saturation. Conversely, if the amount of sample input is too low, the Binding Density will likely be flagged for being less than the optimal range.
- Expression level of genes: if the target genes have high expression levels, there will be more molecules on the lane surface which will increase the Binding Density value.
- Size of the CodeSet: a large CodeSet with probes for many targets is more likely to have high Binding Density values than a CodeSet with probes for fewer targets. A small CodeSet with a limited number of targets is more likely to have low Binding Density values.
The normalized data in Advanced Analysis can be downloaded by navigating to the “Normalization” tab in the HTML report containing the analysis results and clicking the green button labeled “All Normalized Data”. The normalized data is formatted as a CSV file, which can then be opened in Excel. Additionally, the normalized data file can be found in the “Normalization” folder within the Advanced Analysis output written to the user’s computer.
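The exported CSV can also be loaded programmatically instead of in Excel. The sketch below uses only the Python standard library; the column layout (a gene column followed by one column per sample) is illustrative and may not match the exported file exactly.

```python
# Loading an "All Normalized Data"-style CSV outside of Excel (sketch).
import csv
import io

# Stand-in for open("All Normalized Data.csv"): a tiny synthetic example.
sample_csv = io.StringIO(
    "Gene,Sample1,Sample2\n"
    "ACTB,5400.2,6100.8\n"
    "GAPDH,1200.0,1350.5\n"
)

normalized = {}
for row in csv.DictReader(sample_csv):
    gene = row.pop("Gene")
    normalized[gene] = {sample: float(v) for sample, v in row.items()}

print(normalized["ACTB"]["Sample2"])  # prints 6100.8
```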
- First, make sure that you “unhide” hidden folders on your computer. Go to the Control Panel and, using the search bar, look for “Folder Options” (Windows 7 or 8) or “File Explorer Options” (Windows 10). Open it and click the “View” tab. Under “View”, select “Show hidden files, folders, and drives”. Click Apply and exit. Hidden files and folders should now be visible.
- Navigate to
- Copy the folder and zip the copy to USB
- On the computer being transferred to, if nSolver is already present, find the nSolver4 folder at the above location and rename (or move to another location).
- Unzip, then place the transferred nSolver4 folder into the same location on the new computer.
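The zip/back-up/restore steps above can be sketched with the Python standard library. The paths here are placeholders; substitute the actual nSolver4 location on your system.

```python
# Sketch of the copy/zip/restore steps above (paths are placeholders).
import shutil
from pathlib import Path

def export_nsolver(src_dir, archive_base):
    """Zip the nSolver4 folder for transport (e.g., to a USB drive).
    Returns the path of the created .zip archive."""
    return shutil.make_archive(str(archive_base), "zip", root_dir=str(src_dir))

def import_nsolver(zip_path, dest_dir):
    """Back up any existing folder at the destination, then unpack there."""
    dest = Path(dest_dir)
    if dest.exists():
        dest.rename(dest.with_name(dest.name + "_backup"))
    shutil.unpack_archive(str(zip_path), str(dest))
```

Renaming the existing folder before unpacking mirrors the manual instruction above, so the original data can be recovered if the transfer goes wrong.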
(A) Using Finder
- Open Terminal and type the following command:
defaults write com.apple.finder AppleShowAllFiles YES
This will make the invisible files show up in Finder (warning: if you have never done this, there are lots of invisible files and directories).
- Copy the .nSolver4 directory to be transferred (/Users/<user name>/.nSolver4; don’t forget the preceding dot in .nSolver4) to a flash drive, or zip the copy for transport via file share.
- On the other Mac, type the same terminal command to show hidden folders, rename the existing .nSolver4 directory (e.g., .nSolver4a), then unzip and drag and drop the replacement into the same location.
- When finished, go back to Terminal on both systems and type:
defaults write com.apple.finder AppleShowAllFiles NO
Everything will now be back to normal.
(B) Using Terminal
You can use Terminal to do all of this instead, which means you don’t have to tell Finder to show hidden files:
- ls -la lists all files, including hidden ones.
- Use the ditto command to copy (though copy/paste should work as well).
Making hidden files and folders viewable for Mac:
- Open a terminal window by finding “Terminal” in the Utilities folder in the Applications directory. At the prompt copy and paste the following:
defaults write com.apple.finder AppleShowAllFiles TRUE
- Hit Enter, then relaunch Finder for the change to take effect.
We do not recommend changing the RLF name as this can cause difficulties with data collection and analysis as well as lead to confusion if the data are analyzed in the future by someone unaware of the RLF modification. We strive to maintain the single correct version of each RLF file within our bioinformatics database. If you are seeing differences in content within a single RLF version, please contact email@example.com with the RLFs in question.
Advanced Analysis (AA) installation on Mac
Here are detailed instructions, including Mac-specific information, for getting the AA module working:
- Install R 3.3.2, using this download link: R-3.3.2.pkg
- Install XQuartz, which you can obtain here: https://www.xquartz.org/
- Download the zipped file titled nCounter Advanced Analysis 2.0.115.
Do not extract or unzip this compressed file. Since Macs will often do this automatically, you will likely need to re-zip the folder to install the module.
Save/move this file to a location of your choosing on your computer.
- Launch the nSolver™ Analysis Software.
- From the Analysis menu at the top of the home screen, select Advanced Analysis Manager.
- In the Advanced Analysis Manager window, click Import New Advanced Analysis.
- Browse to the directory where the nCounter Advanced Analysis 2.0.115 .ZIP file is located and click OK.
- Wait for import to complete and the name to populate automatically.
- Click OK to exit the Advanced Analysis Manager.