I have a procedure that imports data from Oracle tables into SQL Server using OPENQUERY with SELECT ... INTO. The SQL is dynamic: the table name is passed in and the statement is run in a loop. The code fired for a particular table is:
SELECT * INTO myTempTable1 FROM OPENQUERY(myLink, 'select * from table1')
The temp table is then used to update the final table in SQL Server, after which the temp table is dropped.
The problem is that a VARCHAR2(30) column in Oracle is converted to nvarchar(30) in SQL Server, and this is causing some data to be truncated.
Yes, VARCHAR2(30) needs to be mapped to varchar(30), but that is not happening automatically. Since this is a SELECT * INTO, I am not specifying the column types; I am relying on SQL Server to choose the matching type. That is my problem: how do I get SQL Server to realise that VARCHAR2(30) should map to varchar(30), not nvarchar(30)?
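For context, the dynamic SQL described above looks roughly like this. This is only a sketch of the pattern from the question: the variable @tableName, the use of sp_executesql, and the loop driver around it are assumptions, not code from the original procedure.

```sql
-- Rough sketch of the dynamic-SQL statement built inside the loop.
-- @tableName is supplied by the caller; myLink and myTempTable1 are
-- the names from the question. This assumes @tableName is trusted
-- (it is concatenated into the Oracle-side query text).
DECLARE @sql nvarchar(max);

SET @sql = N'SELECT * INTO myTempTable1 '
         + N'FROM OPENQUERY(myLink, ''select * from ' + @tableName + N''')';

EXEC sp_executesql @sql;
```

With this pattern, SELECT ... INTO infers every column type from what the OLE DB provider reports, which is where the VARCHAR2-to-nvarchar mapping comes from.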
If you insert data from a VARCHAR(30) column (or its Oracle equivalent, assuming it really is equivalent) into an NVARCHAR(30) column, the data should not be truncated. If the data is getting truncated, the source of the problem may be somewhere else.
Try a couple of experiments:
1. Remove the "INTO myTempTable1" part: do you see all the data coming through correctly?
2. For testing purposes, create the destination table explicitly and then use INSERT INTO myTempTable1 SELECT * FROM ...
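Experiment 2 can be sketched like this. The column name some_col is a placeholder (the question does not name its columns); only the VARCHAR2(30) width and the myLink/table1/myTempTable1 names come from the original post.

```sql
-- Create the destination with the types you actually want, so the
-- provider's nvarchar mapping cannot dictate the table's schema.
CREATE TABLE myTempTable1 (
    some_col varchar(30)  -- placeholder column; matches VARCHAR2(30) in Oracle
    -- ...remaining columns declared explicitly...
);

INSERT INTO myTempTable1
SELECT * FROM OPENQUERY(myLink, 'select * from table1');
```

If the data arrives intact this way, the truncation is happening in the type inference of SELECT ... INTO rather than in the transfer itself, which narrows down where to look.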