Hi, I have a flat file with a very large number of tab-delimited columns, something like 3700. The structure of the columns is:
ID start end col1 col2 col3 ... col3700
I would like to split the text file into three tables on import to SQL Server, each with about 1000 columns, since SQL Server won't handle many more than that. Each table should have the same three key fields (ID, start, end); the first table should then get columns 1-1000, the next 1001-2000, and so on.
So how can I split the file, given that the columns are delimited by tabs and the CR/LF appears only at the end of each row?
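If you'd rather pre-split the file before importing, the scheme above can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the file has a header row, the first three columns are the ID/start/end keys, and the output filenames (`..._part1.txt` etc.) are made up for illustration.

```python
import os

def split_wide_file(src_path, chunk=1000, key_cols=3):
    """Split a tab-delimited file into narrower files, each repeating
    the first key_cols columns (ID, start, end) followed by one
    chunk-sized slice of the remaining data columns."""
    outs = []
    with open(src_path, "r", encoding="utf-8", newline="") as src:
        for line in src:
            fields = line.rstrip("\r\n").split("\t")
            keys, data = fields[:key_cols], fields[key_cols:]
            if not outs:
                # Decide how many output files we need from the first row.
                n_files = (len(data) + chunk - 1) // chunk
                base, ext = os.path.splitext(src_path)
                outs = [open(f"{base}_part{i + 1}{ext}", "w", encoding="utf-8")
                        for i in range(n_files)]
            for i, out in enumerate(outs):
                slice_ = data[i * chunk:(i + 1) * chunk]
                out.write("\t".join(keys + slice_) + "\n")
    for out in outs:
        out.close()
    return [out.name for out in outs]
```

Each output file then has at most 1003 columns and can be imported with the wizard as-is; the repeated key columns let you join the three tables back together later.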
If you are using the Import/Export Wizard, one of the dialogs has an Edit Mappings button. Click it and select the columns you want to import; that will require you to run more than one package. If you are using BIDS, you can do something similar by having multiple data flow tasks and, in each one, picking the subset of columns that should go into that destination table.
Thanks for the replies; they got me thinking. What I was doing wrong was trying to select the required columns at the source connection, whereas you need to select all the columns in the source, create three destination connections, and make the column selections there. You can then import to SQL Server, export to text, etc. using a Multicast transform.