So I'm working with this huge batch of files that were provided to us by a data licensing company. The description column, for instance, is documented as varchar(200). However, running a suggestion scan shows some values in excess of that, which is a problem.
I would just have it scan the whole file and figure it out, but I don't want to be so obtuse as to simply max that value out.
So now I'm left with importing 128 files across 5 types by hand. Is there a saner way to manage this?
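For what it's worth, here's a rough sketch of how the loads could be scripted instead of done one at a time. It assumes the vendor files are delimited text and that each file type maps to one staging table; the paths, table names, and delimiter are placeholders, not the real layout:

```sql
-- Placeholder list of files and their target tables; in practice this
-- could be generated from a directory listing of the vendor drop folder.
DECLARE @files TABLE (FilePath nvarchar(260), TargetTable sysname);

INSERT INTO @files (FilePath, TargetTable)
VALUES (N'C:\vendor\descriptions_001.txt', N'staging.Descriptions'),
       (N'C:\vendor\descriptions_002.txt', N'staging.Descriptions');
       -- ...remaining files

DECLARE @path nvarchar(260), @table sysname, @sql nvarchar(max);

DECLARE file_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT FilePath, TargetTable FROM @files;

OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @path, @table;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build one BULK INSERT per file rather than importing each by hand.
    -- The pipe delimiter and header row are assumptions about the files.
    SET @sql = N'BULK INSERT ' + @table +
               N' FROM ''' + @path + N'''' +
               N' WITH (FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''\n'', FIRSTROW = 2);';
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM file_cursor INTO @path, @table;
END

CLOSE file_cursor;
DEALLOCATE file_cursor;
```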
I want to clarify: when I say "the descriptions," I mean the column definitions provided to us by the data vendor in their documentation. So using string200 failed, then string270 got me an extra couple of million rows. Past that, I can't scan the file in its entirety to find out where the widest value in the column is. I can't even open the file in any sane editor.
OK, so I'm kind of cheating. I'm assuming there are some real stragglers in all this data, so I've gone with a string size of 4000. Once it's all in, I'll run a query to determine the actual maximum string length and shrink the column after the fact.
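The cleanup step I have in mind looks roughly like this (table and column names are placeholders):

```sql
-- Find the widest value actually present in the loaded data.
-- Note: LEN() ignores trailing spaces, which is fine for sizing here.
SELECT MAX(LEN(Description)) AS MaxDescriptionLength
FROM staging.Descriptions;

-- Then shrink the column to a size a little above that figure,
-- e.g. if the query returns 412:
ALTER TABLE staging.Descriptions
    ALTER COLUMN Description varchar(450) NULL;  -- match the column's existing nullability
```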
It could have been an issue with the download too, or my computer corrupting it. Honestly, is it unreasonable for me to request this already done as an SQL Server file? I understand some people want to start from ground zero, but this is just way too much work for anyone who isn't a data warehouse master.
I think it may be an issue on their end too, though, because three separate tables from the same day are corrupt.