                          Data Proļ¬ling for ETL Processes
Maunendra Sankar Desarkar
IIT Kanpur



Abstract
The ultimate success of an ETL process rests on the delivery of correct, reliable, relevant
and complete information to end users. Hence data cleaning is an important part of any ETL
process. In this report, we look at some common errors in data stored in databases and describe
the functionality of our tool, which helps discover rules for filtering such dirty data. We also
present some ideas on how to develop custom operators that can perform data cleaning
operations on the Europa platform. Such operators will be of immense help in designing
ETL processes using Europa.
Keywords: ETL, data cleaning, data profiling, Europa operator


1 Introduction
Any companyā€™s proļ¬tability depends on its ability to make sound business decisions, based on
complete, accurate view of the customers, suppliers and transactions. However, this information
is not maintained at a single place, rather scattered throughout the enterprise - across multiple
departments, divisions and applications. Additionally, there are company mergers and acquisitions
which bring diverse operational systems together. Hence these organizations need a Business In-
telligence platform that can consolidate and deliver data from multiple locations into a single and
trustworthy source of information. Then the organization can do reporting, query and analysis,
performance management and take sound business decisions. However, the data present in the con-
solidated repository must be error-free for the decision to be an helpful one for the enterprise. As a
consequence, any data integration process must include a data cleaning phase. Obtaining a proļ¬le
or a set of business rules that are followed by the data can help in data cleaning. ETL processes
aim at data integration. Hence, data proļ¬ling becomes an important part of ETL processes. In
this report we discuss some of these rules. We also discuss how to perform data cleaning with the
help of these rules.
The organization of the report is as follows. In section 2, we give brief introduction to ETL. Section
3 contains some examples of dirty data. Section 4 introduces some concepts of data proļ¬ling. In
section 5 we describe the tool that we developed for data proļ¬ling. We introduce Europa in section
6 and describe the steps required for developing Europa custom operators in section 7. In the next
section we mention some possible future works. Finally we conclude in section 9.


2 ETL
ETL stands for extract, transform and load. The entire process is divided into three phases.

• Extract: The first phase of an ETL process is to extract data from source systems. Most data
warehousing projects consolidate data from different source systems. Each separate system
may use different data organization formats. Source data may be in relational databases, text
files (standard and delimited), XML, COBOL copybooks etc. Additionally, the files might
use different data representation formats.

• Transform: After extraction, the data is transformed, or modified, according to the specific
business logic involved, so that it can be sent to the target repository. There are a variety of
ways to perform the transformation, and the work involved varies. Some examples, sketched
in code after this list, are:

– Derive a new calculated value (final price = original price × (1 − discount))
– Summarize multiple rows of data (find the total number of employees in a branch)
– Join data from multiple sources (get the names of the employees who work in the location
where some particular product is developed)

Before reformatting the data for use by the target schema, most ETL projects perform a
data cleaning operation to remove inconsistencies present in the source data. This
is done during the transformation phase.

• Load: The load phase transports and loads the data into the data warehouse. Depending
on the requirements of the organization, this process varies widely. Some data warehouses
simply overwrite old information with new data; some do this in an incremental fashion.
More complex systems maintain a history and audit trail of all changes to
the data.
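
The following snippet sketches the three example transformations above as SQL statements
embedded in Java. The table and column names (ORDERS, EMPLOYEE, PRODUCT, original_price,
discount, branch, location, dev_location) are hypothetical, chosen only to mirror the examples.

public class TransformSketch {
    // Derive a new calculated value: final_price = original_price * (1 - discount).
    static final String DERIVE =
        "SELECT original_price * (1 - discount) AS final_price FROM ORDERS";

    // Summarize multiple rows of data: total number of employees per branch.
    static final String SUMMARIZE =
        "SELECT branch, COUNT(*) AS num_employees FROM EMPLOYEE GROUP BY branch";

    // Join data from multiple sources: names of employees who work in the
    // location where a particular product ('someProduct', a placeholder) is developed.
    static final String JOIN =
        "SELECT e.name FROM EMPLOYEE e JOIN PRODUCT p"
        + " ON e.location = p.dev_location"
        + " WHERE p.name = 'someProduct'";
}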

Several products are available in the market for preparing and executing ETL applications.
Examples include IBM WebSphere DataStage and Informatica PowerCenter.


3 Dirty data - some examples
A taxonomy of dirty data can be found in [KCH+03]. In this section we give some examples of
different classes of dirty data.

1. Wrong data due to non-enforcement of enforceable integrity constraints:

• Duplicate entries instead of unique entries
• Null values where null is not expected

2. Unwanted data, for which constraints cannot be specified:

• Age is not updated, so the stored age no longer equals (current date − date of birth)
• Work experience < 0, which is outside the possible range

3. Spelling errors

4. Entry into wrong fields

• Address entered in place of name


5. Use of abbreviations

• Both T. J. Watson and Thomas J. Watson are used as entries
• "st" used in the address field instead of "street"

6. Different ordering

• Name written in the order (first name, last name) in some entries and (last name, first name) in others


4 Data Proļ¬ling
Whenever we say some data present in the source is dirty, it means that there is some rule which
the data is supposed to follow, is violated. In data proļ¬ling, we try to ļ¬nd out those ā€œrulesā€?.
Hence proļ¬ling allows to analyze the data values to ļ¬nd areas that are incomplete, inaccurate
or ambiguous. It can also verify relationships across columns and tables. If during the proļ¬ling
stage we can ļ¬nd out the integrity constraints followed by the column, then ļ¬rst error listed in the
previous section can be eliminated. Similarly, if the proļ¬ling phase can ļ¬nd out the range of data
for the columns (where applicable), then the second error in the list can be eliminated. Removal of
other kind of errors (listed in the previous section) require more complex rules to be derived from
data.


5 Functionality of our tool
The tool that we developed for the data profiling task can discover the following rules from the database.

1. Does the column allow null values?

2. Is it a unique key in its table?

3. Is it a categorical field? If yes, what is the domain?

4. Find the acceptable range for the column entries.

5. Find primary key - foreign key (PK-FK) relationships.

The application allows the user to select a set of columns to analyze. It also allows the user
to select which rules to check. It then comes up with the rules by looking at the entries
present in those columns.
For checking nullability, it finds the number of null entries in the column. If there is at
least one null entry, the column is taken to allow null values, since we assume the data source to be
clean. If there is no null value, it reports the column under consideration as a non-nullable
column.
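As an illustration of how such a check could be implemented, the following sketch issues the
corresponding count query over JDBC. The helper is hypothetical, not the tool's actual code.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class NullabilityCheck {
    // Returns true if the column contains at least one NULL entry,
    // i.e. the column apparently allows null values.
    static boolean allowsNulls(Connection con, String table, String column)
            throws SQLException {
        String sql = "SELECT COUNT(*) FROM " + table
                   + " WHERE " + column + " IS NULL";
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1) > 0;
        }
    }
}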
For checking the uniqueness of the field, the application finds the number of distinct elements in
the column. If the number of distinct elements is the same as the number of entries present in the column,
all the elements are distinct. In such cases, it declares the column a unique key.
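A similar hedged sketch for the uniqueness check; it obtains both counts with a single query.
Note that COUNT(column) ignores null entries, so this compares the non-null entries against
the distinct values.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UniquenessCheck {
    // Returns true if every (non-null) entry in the column is distinct,
    // in which case the column is reported as a unique key.
    static boolean isUniqueKey(Connection con, String table, String column)
            throws SQLException {
        String sql = "SELECT COUNT(" + column + "), COUNT(DISTINCT " + column + ")"
                   + " FROM " + table;
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1) == rs.getLong(2);
        }
    }
}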
For the third rule, it calculates the ratio of the number of distinct elements to the total
number of elements in the table. If this ratio is very low, the average frequency of the
elements is very high and the column may be a categorical field. When the ratio falls below some
predefined threshold, the application declares the column a categorical field. The domain of acceptable
values is the set of distinct elements in the column. We check this rule only for columns which
are of char or varchar type and are of reasonably small length (length ≤ some predefined
threshold).
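The categorical-field check could be sketched as follows; the threshold value 0.01 is an
assumed placeholder for the predefined threshold mentioned above, and the method is
illustrative rather than the tool's code.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class CategoricalCheck {
    static final double RATIO_THRESHOLD = 0.01; // assumed value; tune per data set

    // Returns the domain (set of distinct values) if the ratio of distinct
    // elements to total elements falls below the threshold, otherwise null.
    // Intended for short char/varchar columns, as described above.
    static List<String> categoricalDomain(Connection con, String table,
                                          String column) throws SQLException {
        long total, distinct;
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(" + column + "),"
                     + " COUNT(DISTINCT " + column + ") FROM " + table)) {
            rs.next();
            total = rs.getLong(1);
            distinct = rs.getLong(2);
        }
        if (total == 0 || (double) distinct / total >= RATIO_THRESHOLD)
            return null; // not a categorical field
        List<String> domain = new ArrayList<String>();
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT DISTINCT " + column + " FROM " + table)) {
            while (rs.next())
                domain.add(rs.getString(1));
        }
        return domain;
    }
}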
Finding a PK-FK relationship is attempted only if both columns are non-null and of type
integer or smallint. First the tool checks whether the range of the first column is totally contained
in the range of the second column. If it is not, the first column cannot be a foreign key referencing
the second. Otherwise it checks for the inclusion dependency of the first column in the second column.
If this check gives a positive result, the application declares the first column a foreign key
referencing the second column.
The rules that the application outputs after analyzing the columns may not hold in the
context in which the database is used. Essentially, the application provides the expert (user)
with a narrowed-down list of candidate rules which may exist in the database. For example,
the user may find that the range is not exactly what the application reports, but the report
gives some idea of where the range lies. Hence it helps the user find the rules that are
followed by the entries in the database.


6 Europa - a platform to design ETL applications
Europa is a platform for developing ETL applications [Sria]. It provides the user with a set of
operators which can be used to create ETL jobs. Operators are of three types:

• Source operators: Source operators are sources of data.
Example: database tables, flat files, JDBC-ODBC sources etc.

• Transform operators: Transform operators accept data from sources or other transform
operators, process it in some fashion and send the output of the processing to other
transform operator(s) or target operator(s).
Example: sort, filter, union, intersection etc.

• Target operators: Target operators are the targets where data can be stored after process-
ing. They generally consume data generated by transform operators.
Example: database tables, flat files, JDBC-ODBC targets etc.

Operators accept input and provide output through ports. The number of input and output ports that
an operator has can vary depending upon the use of the operator. Besides input-output ports,
operators have a set of properties. These values are manipulated by the ETL project designer to
govern the behavior of the operator.
Any Europa ETL project contains a control flow and one or more data flows. A data flow is a
directed graph of operator nodes interconnected by links that indicate the ETL data transfor-
mation sequence. There can be multiple data flows in a single project. Control flows, on the other
hand, describe the sequence in which different data flows are executed. They also describe how
error recovery, notification and compensation activities are organized. Control flows do not deal
with how exactly data is transformed - that is described in the individual data flows. The applica-
tion developer creates the data flows and control flows, which give a high-level description of the
project. Data flows are saved in the Europa data flow XML format, while control flows are in BPEL.


The corresponding code is generated automatically by Europa. Once the code for the control flow
is generated, the application can be run on WebSphere Application Server.


7 Developing custom operators for Europa
Europa is built on the Eclipse plug-in architecture. Its functionality can be extended by
custom plug-ins. Users can build their own custom operators and place them in an operator
library. Once the operator library is registered with the data flow system, the operators contained in
that library become available for use. Operators and operator libraries are in XML format. Below
we discuss how operator libraries and custom operators can be created [Sria], [Srib].

7.1 Add an operator library to the plug-in
Create an operator library and place your operator there.
Step 1: Create a new plug-in project. It will have a plugin.xml file.
Step 2: Add dependencies to the plug-in. The dependencies vary according to the task that the
operator is supposed to perform. Each such dependency adds one import entry to the plugin.xml
file. For example, after adding the com.ibm.datatools.etl.dataflow.core and
com.ibm.datatools.etl.dataflow.base.oplib plug-ins as dependencies to your plug-in, the
plugin.xml file will contain import entries like the following.

   <requires>
      <import plugin="com.ibm.datatools.etl.dataflow.core"/>
      <import plugin="com.ibm.datatools.etl.dataflow.base.oplib"/>
   </requires>

Step 3: Create an operator library. To do this, choose the extensions tab in the plugin.xml file.
Add a new extension and choose the operatorLibraries extension. Add an operator library to
the operatorLibraries extension. Fill in the libFileName attribute with the name of the operator
library file that you are going to create. The file name's path is relative to the plug-in's home
directory. After the file name is added, plugin.xml records the operatorLibraries extension with
libFileName set to the library file (here, sampleOp.oplib).
Step 4: Edit the oplib file. It must have a unique nsURI attribute. The following is an example
of an empty oplib file. (Element names in this and the following XML snippets are indicative.)

<opLib:OperatorLibrary
    xmlns:opLib="http:///com/ibm/datatools/etl/operatorlibrary.ecore"
    xmlns:coretypes="http:///com/ibm/datatools/etl/coretypes.ecore"
    xmlns:SQLDataTypes="http:///org/eclipse/wst/rdb/models/sql/datatypes.ecore"
    name="europa_sampleOp_lib"
    nsURI="http:///org/europa/sampleOp/lib/sampleOp.oplib">
</opLib:OperatorLibrary>

Step 5: Create a category. Operators and their details are put in categories. An example is
given below. It creates a category named sampleCat.

<categories name="sampleCat">
</categories>


Step 6: Design the operator. Select the category where you want to put it. Place the details
of the operator inside the opening and closing tags of the category. The name and the
codeGeneratorClass of the operator are mandatory in its description. You will also have to mention
its inputs, outputs, properties and parameters, wherever applicable.

• name is the identity by which the platform and the ETL application developer will know
and refer to the operator.

• codeGeneratorClass's className attribute denotes the name of the Java class that will
generate the code for this operator. This class is to be created by you.

<operators name="sampleOp">
    <codeGeneratorClass
        className="org.europa.sampleOp.codegen.codeGenForSampleOp"/>
</operators>

This code snippet means that we are designing a sampleOp custom operator whose code will be
generated by the codeGenForSampleOp class. The path of the codeGenForSampleOp class is

org\europa\sampleOp\codegen\codeGenForSampleOp.

(The operator name and the code generator class name need not match.)

• input and output attributes define the input and output ports of the operator. One input
port can accept data from only one operator, whereas one output port can be connected
to multiple operators. The number of input and output ports that an operator can have in a
data flow can be limited by setting the lowerBound and upperBound of the input and output.
upperBound = -1 means there is no limit on the upper bound, and you can open as many ports
as you want.

<inputs name="input" lowerBound="1" upperBound="1"/>
<outputs name="output" lowerBound="0" upperBound="-1"/>

• A property of an operator is analogous to a parameter in a function call in any high-level
program. The designer can set the value of the property, and that value can be used by the
operator to perform its functionality. For example, the File Export operator has a property
called File Name. This value can be set according to the needs of the user and can be used
in the code generation phase. These properties may be of different 'data types', for example
Integer, List, String, Expression, Database Table, File etc. You can also define custom property
types. For each property type, a GUI "editor" can be associated. This editor class and other
artifacts are invoked in the Europa Data Flow Design editor, and whenever an operator
has a property of this type, one can enter the value of the property from that editor.
Pre-defined property types already have property editors defined, and similar editors can be
defined by custom operator developers. This is how you can define a property of an operator.

<properties name="fileName" propertyType="String"/>

• Properties, inputs and outputs can be defined to be dependent on each other in the operator
definition. In some cases, changes in one (the cause) can be used to cause changes in its
dependent objects. Such dependencies may be specified using dependency tags in the
operator definition.

For example, the Filter operator allows the user to select the input table and the 'filterEx-
pression'. This 'filterExpression' is a property of type condition. It has a condition property
editor associated with it, where you can enter the condition for filtering. The condition prop-
erty editor has a list of the available columns from the input table. So the 'filterExpression'
property depends on the input table. This is captured by lines like the following in the
description of the Filter operator.

<param name="input" type="input"/>

The 'name' attribute of the param tag gives the name of the input, output or property on which
this property depends, and 'type' denotes what its type is - input, output or property.

Step 7: Create the code generator class. Here is an example code generator class.

import java.util.List;
import java.util.Vector;

import com.ibm.datatools.etl.codegen.IGenericCodeGenerator;
import com.ibm.datatools.etl.codeunit.Action;
import com.ibm.datatools.etl.codeunit.CodeType;
import com.ibm.datatools.etl.codeunit.CodeUnit;
import com.ibm.datatools.etl.common.CodeGenOptions;
import com.ibm.datatools.etl.common.Phase;
import com.ibm.datatools.etl.dataflow.Operator;
import com.ibm.datatools.etl.util.CUHelper;

public class codeGenForSampleOp implements IGenericCodeGenerator {

    Operator opInst = null;        // the operator instance this generator works on
    CodeGenOptions options = null; // code generation options passed in by the platform
    CodeUnit cu = null;

    public boolean init(Operator op, CodeGenOptions opts) {
        opInst = op;
        options = opts;
        System.out.println("DEBUG:== operator name = " + op.getItemName() +
                ", label = " + op.getItemLabel());
        return false;
    }

    public List getCodeUnits() {
        Vector cuList = new Vector();
        // Create a code unit holding an SQL script to be executed at runtime.
        CodeUnit customOpCodeUnit = CUHelper.createCU(
                CodeType.SQLSCRIPT_LITERAL,
                Action.EXECUTION_LITERAL,
                Phase.RUNTIME_LITERAL);
        // Read the operator's fileName property; its value can be used while
        // generating the code for this operator.
        String fileName = (String) opInst.getPropertyValueRef("fileName");
        cuList.add(customOpCodeUnit);
        return cuList;
    }
}



Here is an example of a complete operator library file. The name of the library is
europa_customOp_lib. It has one custom operator defined in it, named sampleOp.

<oplib:OperatorLibrary
    xmlns:oplib="http:///com/ibm/datatools/etl/operatorlibrary.ecore"
    xmlns:coretypes="http:///com/ibm/datatools/etl/coretypes.ecore"
    xmlns:SQLDataTypes="http:///org/eclipse/wst/rdb/models/sql/datatypes.ecore"
    name="europa_customOp_lib" nsURI="http:///org/europa/sampleOp/lib/sampleOp.oplib">
   <categories name="sampleCat">
      <operators name="sampleOp">
         <codeGeneratorClass
             className="org.europa.sampleOp.codeGenForSampleOp"/>
      </operators>
   </categories>
</oplib:OperatorLibrary>

7.2 Deļ¬ne design time GUI artifacts
Once the backbone of the operator is deļ¬ned, the next task is to make the custom operator appear
in the operator palette in the Europa runtime workbench.
Step 1: Create another plug-in project. Add the plugins com.ibm.datatools.etl.dataļ¬‚ow.ui
and the com.ibm.datatools.etl.dataļ¬‚ow.base.oplib.ui as dependencies.
Step 2: Add the com.ibm.datatools.etl.properties.ui.OperatorLibraryPresentation
extension. Enter the OpLibPresentation ļ¬le name in the plugin.xml ļ¬le. The ļ¬le name must have
.prxmi extension. After this, the plugin.xml ļ¬le will look something like this. (suppose the name
of the OpLibPresentation ļ¬le name was sampleOp.prxmi.


id="org.europa.sampleOp.lib.ui"
name="Ui Plug-in"
version="1.0.0"
provider-name="">










id="org.europa.sampleOp.lib.ui.Presentation"
name="Sample Custom Operator Presentation"
point="com.ibm.datatools.etl.properties.ui.OperatorLibraryPresentation">







9
Step 3: Create and edit the presentation file (the one with the .prxmi extension). Add a header
and footer to it, and an empty palette category (element names shown are indicative).

<?xml version="1.0" encoding="UTF-8"?>
<pres:Presentation
    xmlns:pres="http://com.ibm.datatools.etl.dataflow.presentation">

   <pres:PaletteCategory
      id="sampleCat"
      label="sampleOps_category_name"
      description="sampleOps_category_desc"
      appliesToEditorType="DataFlowEditor"
      smallIcon="/icons/sampleOpCategory.gif"
   />

</pres:Presentation>

Step 4: Add a palette entry for the custom operator. Put it inside the scope of a category.
Here is an example (again with indicative element names).

<pres:PaletteEntry
    id="sampleCat.sampleOp"
    label="sampleOpName"
    description="sampleOp_desc"
    categoryID="sampleOpPaletteLib"
    widgetTypeID="http:///org/europa/sampleOp/lib/sampleOp.oplib/sampleCat/sampleOp"
    appliesToEditorType="DataFlowEditor"
    smallIcon="/icons/sampleOpIcon.gif"
/>


The icon ļ¬les have to be placed in the icons subdirectory under the ui plugin directory. Be sure
about the icon names. The type of the icon ļ¬le and the name should be exactly as you have
speciļ¬ed in the prxmi ļ¬le. In the examples above, the entries ā€œsampleOps category nameā€?, ā€œsam-
pleOps category descā€?, ā€œsampleOpNameā€?, ā€œsampleOp descā€?, are analogous to variables in pro-
gramming language terminology. Their exact values can be deļ¬ned in plugin.properties ļ¬le. Edit
the ļ¬le (create it ļ¬rst if it does not exist) and enter the values of the above mentioned variables.

sampleOps_category_name=My custom operators
sampleOps_category_desc=This category contains my own operators

sampleOpName=sample operator
sampleOp_desc=This is a sample operator


Now the operator is ready for use. You can create a runtime workbench and execute it, and you
will see the operator in the operator palette.


8 Future Work
The rules that can be discovered by our application can be used to filter some classes of dirty
data. However, there are more complex rules which are very common in databases: for example,
ordering relationships between two columns, or algebraic relationships between two or more columns.
Sometimes the value in one column restricts the range or domain of some other column. Once such
rules can be discovered from the source data, the power of the data cleaning phase will increase
considerably.
Focus can also be given to generating Europa operators with more complex processing power. Currently
the ETL process designer has to draw the data flow and control flow diagrams. This work could
be reduced if data flows could be generated automatically.


9 Conclusion
In this report we have discussed why data profiling is needed for ETL processes. We have seen
some classes of dirty data. We have described the functionality of our data profiling tool and
the technique used to come up with the rules. These rules (or their modified versions) can be used to
eliminate some types of dirty data. However, coming up with more sophisticated rules can ensure
that higher-quality data is sent to the target data repository. We have also described how to create
custom operators for the Europa platform and make them available for creating data flows in Europa.

Acknowledgement

I am thankful to my mentor, Natwar Modani, for his help, support and guidance throughout
this work. I am indebted to Mr Mukesh Mohania for many fruitful discussions. I have been
very lucky to interact with other employees at IBM India Research Lab, and I thank them for their
suggestions and constant encouragement. Special thanks to Mr Dinkar Rao for explaining everything
in minute detail whenever I sought help from him.


References
[KCH+03] Won Y. Kim, Byoung-Ju Choi, Eui Kyeong Hong, Soo-Kyung Kim, and Doheon Lee.
A taxonomy of dirty data. Data Min. Knowl. Discov., 7(1):81-99, 2003.

[Sria] Sriram Srinivasan. Europa Developer Note.

[Srib] Sriram Srinivasan. IBM DWE ETL (Europa).



