Channel: Teradata Downloads - Connectivity

Sqoop Import from Teradata with 30 lines of SQL query using --query is failing


Hi,

 

I was trying to import data from Teradata into Hadoop using a Sqoop command. The import joins several tables on Teradata and loads the result into Hive. I'm using Sqoop's --query option to supply the SQL, and the query I'm using is more than 30 lines long. The import is failing with the following error.

 

14/01/10 11:30:28 INFO manager.SqlManager: Using default fetchSize of 1000

14/01/10 11:30:28 INFO tool.CodeGenTool: Beginning code generation

14/01/10 11:30:29 ERROR manager.SqlManager: Error executing statement: com.teradata.jdbc.jdbc_4.util.JDBCException: [Teradata Database] [TeraJDBC 14.00.00.01] [Error 3707] [SQLState 42000] Syntax error, expected something like ';' between an integer and '('.

com.teradata.jdbc.jdbc_4.util.JDBCException: [Teradata Database] [TeraJDBC 14.00.00.01] [Error 3707] [SQLState 42000] Syntax error, expected something like ';' between an integer and '('.

 

 

I tried "sqoop eval" to check whether Sqoop can handle such large query.  It was successful. It returned me the result on the putty console. But when I use the same SQL in the import command it is not working. I'm using the following command.

 

sqoop import -libjars /usr/lib/sqoop/lib/tdgssconfig.jar,/usr/lib/sqoop/lib/terajdbc4.jar --driver com.teradata.jdbc.TeraDriver --connect "jdbc:teradata://111.111.111.11/DATABASE=vedw" -m 1 --username uname --password pwd --hive-table PRED_CUST --hive-import --query "SELECT query with JOINS and WHERE \$CONDITIONS" --target-dir /user/hdfs/PRED_CUST

 

The following sqoop eval command works fine.

 

sqoop eval -libjars /usr/lib/sqoop/lib/tdgssconfig.jar,/usr/lib/sqoop/lib/terajdbc4.jar --driver com.teradata.jdbc.TeraDriver --connect "jdbc:teradata://111.111.111.11/DATABASE=vedw" -m 1 --username uname --password pwd --query "SELECT query with JOINS and WHERE"

 

Please let me know if I'm doing anything wrong, or suggest a workaround. Thank you.

 

Srikanth

Which Teradata drivers support data encryption?

We have a 12-year-old classic ASP web application that uses an ODBC Teradata driver. It only runs SELECTs against the database. We are being told by our local Teradata administrator to use the new .NET version 14 provider. Does anyone know whether this driver will work with classic ASP, or whether some version of the ODBC or OLE DB Teradata drivers supports data encryption as well?

Connection Profile for Vertica in Teradata SQL Assistant

Hi,
I have been connecting to Vertica using Teradata SQL Assistant 12 (which requires an ODBC driver). However, with Teradata SQL Assistant 13.11 (Java Edition) I am unable to create a connection profile for Vertica. Even if I go for a "Generic JDBC" connection and provide the details using "New Driver Definition", I am unable to connect to Vertica.
Can anybody please help with this?
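For reference, the driver details I am trying to register in the "New Driver Definition" dialog are along the lines of the untested sketch below. The class name, default port, and URL format are my assumptions based on the Vertica JDBC driver documentation (with the vertica-jdbc jar added to the definition); host, database, and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;

public class VerticaJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Driver class I am entering in the driver definition (assumed name).
        Class.forName("com.vertica.jdbc.Driver");
        // URL template I am using; 5433 is Vertica's default port as far as I know.
        try (Connection con = DriverManager.getConnection(
                "jdbc:vertica://vertica-host:5433/mydb", "myuser", "mypassword")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}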
Thanks!
S. Kumar

How to disable DNS COP lookup at login

We have ColdFusion 8 on Linux in the DMZ. Teradata 14 resides on the LAN. Logins from the DMZ (JDBC) are painfully slow. The network admin sees repeated DNS requests when logins are attempted. The DNS server is on the LAN, and DNS requests are not allowed through the firewall.
We would like the connection request to use the /etc/hosts COP entries on the ColdFusion server, without attempting DNS lookups. If the JDBC connection is defined with just an IP address (no COP entries), it works fine.
Can we disable the DNS portion of logins when using JDBC?
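A minimal sketch of what I am hoping is possible, assuming the Teradata JDBC Driver honors a COP=OFF connection parameter that turns off COP-name discovery (that parameter is my assumption; hostname, database, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class NoCopLookupLogin {
    public static void main(String[] args) throws Exception {
        Class.forName("com.teradata.jdbc.TeraDriver");
        // COP=OFF is intended to make the driver use the host name as-is,
        // instead of generating and resolving hostcop1, hostcop2, ... names.
        String url = "jdbc:teradata://tdprod.example.com/DATABASE=mydb,COP=OFF,TMODE=TERA";
        try (Connection con = DriverManager.getConnection(url, "myuser", "mypassword")) {
            System.out.println("Logged on: " + !con.isClosed());
        }
    }
}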

Upgrade Client ODBC Drivers

I am not an expert at upgrades. I have an old client GSS/ODBC install, version 3.06, and would like to upgrade to the latest V14 drivers. The old drivers are located in an NCR folder, and from initial checks V14 installs under a Teradata folder.
How do I successfully update?

Pasting OLE objects into BLOB fields via MS Access

Hello
I'm writing an MS Access application using Teradata as a back end. One requirement is to be able to store OLE objects (campaign selection flow charts in this instance, but other types fail too). However, when I try to save the record it fails with this message:
[ODBC Teradata Driver] Binary String Truncated (#0)
'Use Native Language Support' is enabled in the ODBC Driver, version 14.00.00.04.
Anybody have any suggestions?
Peter
 

Export from Teradata to MySQL using Java

Hello all,
 
Thanks in advance for reading this.  
 
I am trying to pull data from Teradata and load it into MySQL without hitting the disk.  
 
I am currently doing it with a Perl script, connecting BTEQ to LOAD DATA INFILE with a named pipe on Linux.
 
Now I am trying to convert this Perl script to Java so that I can integrate it with our other ETL processes, and I would also like to get away from driving command-line executables. I know I can do LOAD DATA INFILE through the MySQL JDBC driver and read from a pipe. What I have yet to figure out is how to read the data from Teradata using a pipe (or PipedOutputStream).
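Here is roughly the shape of what I have in mind, as an untested sketch. I am assuming MySQL Connector/J 5.1, where (as far as I know) com.mysql.jdbc.Statement exposes setLocalInfileInputStream() and the URL needs allowLoadLocalInfile=true; hosts, tables, columns, and credentials are placeholders.

import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TeradataToMySqlPipe {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out, 1 << 20);

        // Producer thread: stream rows out of Teradata as tab-delimited text.
        Thread producer = new Thread(() -> {
            try (Connection td = DriverManager.getConnection(
                     "jdbc:teradata://tdhost/DATABASE=mydb", "tduser", "tdpass");
                 Statement st = td.createStatement();
                 ResultSet rs = st.executeQuery("SELECT col1, col2 FROM src_table");
                 PrintWriter w = new PrintWriter(out)) {
                while (rs.next()) {
                    w.println(rs.getString(1) + "\t" + rs.getString(2));
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        producer.start();

        // Consumer: hand the read end of the pipe to LOAD DATA LOCAL INFILE,
        // so nothing is written to disk.
        try (Connection my = DriverManager.getConnection(
                 "jdbc:mysql://myhost/mydb?allowLoadLocalInfile=true", "myuser", "mypass");
             Statement st = my.createStatement()) {
            st.unwrap(com.mysql.jdbc.Statement.class).setLocalInfileInputStream(in);
            st.execute("LOAD DATA LOCAL INFILE 'stream' INTO TABLE dest_table"
                     + " FIELDS TERMINATED BY '\\t'");
        }
        producer.join();
    }
}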
 
Can anyone point me in the right direction?  What tool or library should I be using here?

Once I figure this out, I will be sure to come back and post my results.
Thanks again,
Dan

td wallet for jdbc

Hi,
Can someone share how we can use TD Wallet (tdwallet) with a JDBC connection?
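This is only my guess at what it might look like; I have not verified it. I am assuming a recent Teradata JDBC Driver can resolve a $tdwallet(...) reference in the password field when the Teradata Wallet client software is installed and a value has been stored with "tdwallet add" (host, user, and key name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class TdWalletJdbcLogin {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:teradata://tdhost/DATABASE=mydb,TMODE=TERA";
        // "$tdwallet(my_pwd_key)" would be replaced by the value saved in Teradata Wallet
        // under the key my_pwd_key -- assuming the driver supports this syntax.
        try (Connection con = DriverManager.getConnection(url, "myuser", "$tdwallet(my_pwd_key)")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}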
Thanks and regards,

.Net connection using explicit IP Address

I am using the Teradata 13.10 virtual machine and I have assigned it a static IP address (192.168.88.10) for host only access.  I am able to connect via BTEQ  - .LOGON 192.168.88.10/dbc,dbc - and with SQL Assistant.   When I try to connect via the .NET provider, I get an error saying that it cannot resolve 192.168.88.10 to an IP address.
Adding an entry to my hosts file or changing the DNS configuration is not an option (locked-down PCs in a corporate environment).
My code:

    using Teradata.Client.Provider;

    class Program
    {
        private static string server = "192.168.88.10";
        private static string user = "dbc";
        private static string password = "dbc";

        static void Main(string[] args)
        {
            TdConnectionStringBuilder connstr = new TdConnectionStringBuilder();
            connstr.DataSource = server;
            connstr.UserId = user;
            connstr.Password = password;
            connstr.AuthenticationMechanism = "TD2";
            connstr.DataSourceDnsEntries = 0;
            connstr.ConnectionTimeout = 0;
            
            TdConnection tdConn = new TdConnection();
            tdConn.ConnectionString = connstr.ConnectionString;


            tdConn.Open();

            tdConn.Close();

        }

The exact error I get:
{"[.NET Data Provider for Teradata] [115006] Could not resolve DataSource=[192.168.88.10] to an IpAddress.\r\n[.NET Data Provider for Teradata] [115006] Could not resolve DataSource=192.168.88.10 to an IpAddress."}
 
The connection string the above code yields:
Connection Timeout=0;Authentication Mechanism=TD2;User Id=dbc;Data Source=192.168.88.10;Password=dbc;Data Source DNS Entries=0
Teradata.Client.Provider  Version 13.1.04
My best guess is that it is trying to do a DNS lookup on the hostname 192.168.88.10, which my DNS server knows nothing about. How do I get the .NET provider to connect directly to the IP address and skip name resolution?

Teradata 14, Unica and ODBC

Hi
We currently use Teradata version 13 and have no issues connecting to Unica version 6 via ODBC connection.
We are soon upgrading to Teradata 14 and have been told it can't be connected to an older version of Unica. Unless Teradata 14 has changed the way applications can connect, such as removing ODBC functionality, I can't understand what would stop us from continuing to connect from Unica to version 14 via an ODBC link.
Is anyone else using Teradata 14 with ODBC connections?
Thanks

SAS via OLEDB and Null Values

I am trying to load a small SAS table into Teradata via SAS/ACCESS connectivity  & OLE DB.
The table is mostly NULL and consists of: id, SiteA, SiteB, SiteC, SiteD.  The Site variables are populated with "Yes", "No", or SAS NULL ''.  Only one column is populated at any time.  Therefore the data looks as follows: 

Id       siteA     siteB     siteC     siteD
123      yes  
234                 no
345      no
456                           yes

 When this is transferred to Teradata, it comes out garbled as follows:

Id       siteA     siteB     siteC     siteD
234      yes       no
123      yes
456      nos       no        yes
345      no        no

I have tried the NULLCHAR=, NULLCHARVAL=, and DBNULL= options and have not seen an impact. It appears that the buffer isn't getting fully cleared before moving on to the next row; you can see that id = 456 has retained the "no" and the 's' from "yes" from the previous entries.
Has anyone seen anything like this before?  Does anyone know a solution?
 
 
Below is the code I am currently using, though many options have been tried, including PROC SQL:

libname TERADATA oledb Provider=msdasql dsn=datacore pwd='pass' uid=username schema=USERNAME;

proc sql;
 drop table teradata.patient_test;
quit;

data work.test;
input idcode $20. site_1 $5. site_2 $5. site_3 $5. site_4 $5.;
cards;
123                 yes                 
234                      no             
345                 no                  
456                           yes       
;
run;

data teradata.patient_test;
 set work.test;
run;

 

JDBC Hebrew Support

Hi, I'm an end user, mainly running queries on our database and using reporting services.
My company uses SQL Assistant as the default querying tool, and some of the data I pull is in Hebrew. SQL Assistant is configured to use the Teradata.Net connection with Session Character Set set to ASCII and Session Mode set to DEFAULT. With this configuration I have no problems at all retrieving results and seeing Hebrew and English characters in the appropriate fields.
I've tried switching to a better, more fully featured SQL editor, installing Teradata Studio and also trying the Teradata plug-in for Eclipse. Unfortunately, with both tools, when I query a table with Hebrew data I receive Latin characters in the field contents. I assume this has to do with JDBC and the way the results are returned to the client.
I've tried different settings, changing the Charset value from UTF8 to UTF16 and to ASCII. With ASCII, which is the same setting as in SQL Assistant, I receive question marks instead of letters in the results. I also tried different combinations of fonts on the client, but to no avail.
Is this a limitation of JDBC? Does it have to do with how our Teradata DB is configured and how the table columns are defined (I have no control over that as an end user)? If so, why are the ODBC and Teradata.Net connections able to handle this but not JDBC?
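For reference, the kind of JDBC connection I am testing with is sketched below (untested; host, table, and credentials are placeholders). My understanding is that the session character set is chosen with the CHARSET connection parameter, and that whether Hebrew survives also depends on the CHARACTER SET the columns were created with, which I cannot change:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HebrewReadTest {
    public static void main(String[] args) throws Exception {
        // UTF8 session character set; I have also tried UTF16 and ASCII here.
        String url = "jdbc:teradata://tdhost/DATABASE=mydb,CHARSET=UTF8";
        try (Connection con = DriverManager.getConnection(url, "myuser", "mypassword");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT some_hebrew_column FROM some_table")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}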
I would appreciate any help with the subject.

Load Nulls with R & JDBC Fastload Problem

Hi all,
I am trying to load some data from R directly into Teradata 14.10 via JDBC Fastload.
The data contains some Nulls for numeric values and I have not been able to load this data so far. 
The code below is an example.
Three columns are generated. One integer, one char and one integer again.
The last column will have about 25% nulls.
If you comment out the line that assigns NaN (j[i1] <- NaN; line 48 in my script), no nulls are generated and you can see that the load per se is working.
From what I have read so far, I understand that nulls need to be set differently, and I tried to do that within the myinsert function, but obviously without success.
Is this a bug? I saw some other posts on JDBC and nulls, but I am not sure the issue is the same.
 
Any support is very appreciated.

Ulrich

library(RJDBC)
################
#def functions
################
myinsert <- function(arg1,arg2,arg3){
  .jcall(ps,"V","setInt",as.integer(1),as.integer(arg1))
  .jcall(ps,"V","setString",as.integer(2),arg2)
  if (is.na(arg3)==TRUE) {
    .jcall(ps,"V","setNull",as.integer(3),Types.INTEGER)
  } else {
    .jcall(ps,"V","setInt",as.integer(3),as.integer(arg3))
  }
  .jcall(ps,"V","addBatch")
}


MHmakeRandomString <- function(n=1, lenght=12)
{
  randomString <- c(1:n)                  # initialize vector
  for (i in 1:n)
  {
    randomString[i] <- paste(sample(c(0:9, letters, LETTERS),
                                    lenght, replace=TRUE),
                             collapse="")
  }
  return(randomString)
}

################
#DB Connect
################
.jaddClassPath("/MyPath/TeraJDBC__indep_indep.14.10.00.17/terajdbc4.jar")
.jaddClassPath("/MyPath/TeraJDBC__indep_indep.14.10.00.17/tdgssconfig.jar")
drv = JDBC("com.teradata.jdbc.TeraDriver","/MyPath/TeraJDBC__indep_indep.14.10.00.17/tdgssconfig.jar","/MyPath/TeraJDBC__indep_indep.14.10.00.17/terajdbc4.jar")
conn = dbConnect(drv,"jdbc:teradata://neo/CHARSET=UTF8,LOG=ERROR,DBS_PORT=1025,TYPE=FASTLOAD,TMODE=TERA,SESSIONS=1","uli","m00rhuhn") 

################
#main
################

##gen test data
dim = 10000
i = 1:dim
s = MHmakeRandomString(dim,12)
j = sample(1:10000, dim)
i1 <- j %% 4 == 0
#assign some NA
j[i1] <- NaN



## set up table
dbSendUpdate(conn,"drop table foo;")
dbSendUpdate(conn,"create table foo (a int, b varchar(100),c int);")

#set autocommit false
.jcall(conn@jc,"V","setAutoCommit",FALSE)
##prepare
ps = .jcall(conn@jc,"Ljava/sql/PreparedStatement;","prepareStatement","insert into foo values(?,?,?)")

#start time
ptm <- proc.time()

## batch insert
for(n in 1:dim){ 
  myinsert(i[[n]],s[[n]],j[[n]])
}
#run time
proc.time() - ptm

#apply & commit
.jcall(ps,"[I","executeBatch")
dbCommit(conn)
.jcall(ps,"V","close")
.jcall(conn@jc,"V","setAutoCommit",TRUE)

#get some sample results
dbGetQuery(conn,"select top 100 * from foo")
dbGetQuery(conn,"select count(*) from foo")

#disconnect
dbDisconnect(conn)
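For reference, the plain-JDBC call I am trying to reproduce from R is the two-argument PreparedStatement.setNull(parameterIndex, sqlType). Below is an untested Java sketch of the same insert (connection details and the foo table are as in the R code above; the password is masked). In rJava there is no Types.INTEGER object, so I suspect the java.sql.Types.INTEGER constant (the integer value 4) has to be passed explicitly, but that is exactly the part I am unsure about:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Types;

public class FastloadNullSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:teradata://neo/CHARSET=UTF8,TYPE=FASTLOAD,TMODE=TERA,SESSIONS=1";
        try (Connection con = DriverManager.getConnection(url, "uli", "********")) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement("insert into foo values(?,?,?)")) {
                ps.setInt(1, 1);
                ps.setString(2, "abcdefghijkl");
                ps.setNull(3, Types.INTEGER);   // java.sql.Types.INTEGER == 4
                ps.addBatch();
                ps.executeBatch();
                con.commit();
            }
            con.setAutoCommit(true);
        }
    }
}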

 
 

Possible to install both 13 and 14 versions of the driver? SUSE Linux

Hello,
We installed the Teradata 13 ODBC driver on our SAP HANA system (SUSE Enterprise Linux), but when we try to install the version 14 driver it tells us the files in /usr/lib and /usr/lib64 conflict with the files from the 13 version. Is it possible to specify different directories for each version of the driver, or are we only supposed to have one version of the driver on our system and use it to connect to both Teradata 13 and 14 systems?
 
Thanks!
Minsan Sauers
 

JDBC URL specify failover server

Hi there! We have a web application connecting to Teradata using a JDBC URL. To handle automatic failover, I would like to specify a failover server in the JDBC URL, but I couldn't find any information on what attribute I can use to achieve this. Any help is greatly appreciated.
 
Current URL: jdbc:teradata://primaryserver.xyz.com/database=DB1,TMODE=TERA
 
I am looking for something like the following:
 
jdbc:teradata://primaryserver.xyz.com/database=DB1,TMODE=TERA secondaryserver.xyz.com/database=DB1,TMODE=TERA
 

ODBC Connectivity Issue

I have installed both the 32-bit and 64-bit ODBC drivers for Teradata, and I have set up a DSN for each using the ODBC administrators below:
for 64-bit - C:\Windows\System32\odbcad32.exe ; DSN Name : DW64
for 32-bit - C:\Windows\SysWoW64\odbcad32.exe ; DSN Name : DW32
When I try DSN DW32 in my Report Builder 3.0, I get the following error:
Error [IM014] [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application
When I try to use DSN DW64 in my Report Builder 3.0, I get the following error:
Error [IM003] specified driver could not be loaded due to system error 126: The specified module could not be found. (Teradata, C:\Program Files\Teradata\Client\14.00\ODBC Driver for Teradata nt-x8664\Lib\tdata32.dll).
 
I verified that the Environment Variable Path has both the 32-bit and 64-bit details of the Teradata drivers.
I also tried uninstalling and re-installing the drivers, ensuring the following installation order was followed: tdicu, TeraGSS, ODBC driver.
Still I'm not able to connect to Teradata from Report Builder 3.0.
Please help!
 
 
 

Cloudera Connector Powered by Teradata - Charset UTF8 Problem with "Special Chars like €"

There is a thread in the database forum that should really be here: http://forums.teradata.com/forum/database/teradataimporttool-charset-problem
---
Hi.

 

I am importing data from Teradata to Hadoop with the "Teradata Connector for Hadoop (Command Line Edition): Cloudera" v1.2:

http://downloads.teradata.com/download/connectivity/teradata-connector-for-hadoop-command-line-edition

 

I have a table like this:

 

create table testtable (
  id int not null,
  value varchar(50),
  text varchar(200),
  PRIMARY KEY (id)
);

 

And I have inserted this data:

 

insert into testtable values (1, '#1€', 'aá');
insert into testtable values (2, '#2€', 'eé');

 

The import job works normally:

 

export USERLIBTDCH=/usr/lib/tdch/teradata-connector-1.2.jar

hadoop jar $USERLIBTDCH com.teradata.hadoop.tool.TeradataImportTool -classname com.teradata.jdbc.TeraDriver -url jdbc:teradata://teradataServer/DATABASE=test,CHARSET=UTF8 -username dbc -password dbc -jobtype hdfs -fileformat textfile -targetpaths /temp/hdfstable -sourcetable testtable -splitbycolumn id

But the resulting file in HDFS looks like this:

 

1 #1? a?

2 #2? e?

 

How can I import "special" characters from Teradata to Hadoop (UTF-8)? If I use the JDBC driver directly (e.g., from a Java program), it works OK; the problem seems to be in the connector...

JDBC driver version connection

I am setting up a demonstration in AWS using the Teradata Express V14 AWS demo AMI. I am also trying to connect to it from OBIEE (Oracle Business Intelligence platform). My knowledge of these tools is very limited! OBIEE appears to natively support JDBC connectivity to Teradata using the v13.0 JDBC driver. Will this work with the V14 Teradata Express platform?
I am currently in the intelligence gathering phase and do not want to waste too much time on something that will absolutely not work without considerable config updates.

Can I use the Teradata ODBC Linux drivers with Mac OS X

Hi,
I have been using Windows and Python as a scripting interface to access Teradata. I recently switched to a Mac, and I am wondering whether I could use the Linux ODBC drivers to connect to Teradata using Python on a Mac.
I tried using JDBC drivers, but unfortunately I couldn't find a good enough Python package that supports JDBC.
Any help would be appreciated!

Catch a Duplicate row error when executing a PreparedStatement batch request

Hello all,
Thank you for reading this.
I pull data from Oracle into Teradata using a PreparedStatement batch request, and some of the rows are duplicates.
So when I execute the preparedStatement.executeBatch() method, I catch an exception, "Duplicate row error in perf.TB_C_TMP", and
the program exits on error.
So my question is: why do duplicate rows stop the batch insert into Teradata? Can somebody help me?
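In case it clarifies what I am after, below is an untested sketch of how I imagine trapping the failure instead of exiting (generic JDBC; my understanding is that executeBatch() throws java.sql.BatchUpdateException and that getUpdateCounts() marks failed elements as Statement.EXECUTE_FAILED; 2802 is the Teradata duplicate-row error code as far as I know). I am also wondering whether making the target table MULTISET would avoid the error entirely.

import java.sql.BatchUpdateException;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchInsertWithDuplicates {
    static void runBatch(PreparedStatement ps) throws SQLException {
        try {
            ps.executeBatch();
        } catch (BatchUpdateException bue) {
            int[] counts = bue.getUpdateCounts();
            for (int i = 0; i < counts.length; i++) {
                if (counts[i] == Statement.EXECUTE_FAILED) {
                    System.err.println("Batch element " + i + " was not inserted.");
                }
            }
            // Walk the chained exceptions; 2802 should be the duplicate-row error.
            for (SQLException e = bue; e != null; e = e.getNextException()) {
                System.err.println("Error " + e.getErrorCode() + ": " + e.getMessage());
            }
        }
    }
}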
