Channel: Teradata Downloads - Connectivity

Teradata - MicroStrategy v9.4.1 Connectivity Issue


Hello,
We want to connect MicroStrategy and Teradata on a Red Hat Enterprise Linux (64-bit v6, Santiago) machine, but we are running into a driver architecture issue while doing so.

According to MicroStrategy, only 32-bit Teradata ODBC drivers are supported.
We have downloaded the Teradata drivers (v13.10.00.11) from the following link:
http://downloads.teradata.com/download/connectivity/odbc-driver/linux
 
After installing the drivers, the following components were present on the machine:
1. tdicu-13.10.00.02-1.noarch
2. TeraGSS_redhatlinux-i386-13.10.07.09-1.i386
3. TeraGSS_suselinux-x8664-13.10.07.09-1.x86_64
4. tdodbc-13.10.00.11-1.noarch
 
But when we try to connect to Teradata using the MicroStrategy test tool, it gives the following error:

Connect failed.

Error type: Odbc error. Odbc operation attempted: SQLDriverConnect. [81:0: on HDBC] 523 80

We have consulted the MicroStrategy team; they are of the opinion that some components of the ODBC drivers are 64-bit, hence the issue:
https://resource.microstrategy.com/forum/ReplyListPage.aspx?id=38532
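For reference, a quick way to confirm which installed pieces are 32-bit versus 64-bit (the library path below is an assumption based on typical TTU 13.10 Linux layouts; adjust it to wherever tdodbc actually installed):

# List the installed Teradata packages together with their architectures.
rpm -qa --qf '%{NAME}-%{VERSION}.%{ARCH}\n' | grep -Ei 'tdodbc|tdicu|teragss'

# Inspect the ELF class of the ODBC driver shared library itself;
# the path is an assumed default and may differ on your system.
file /opt/teradata/client/13.10/odbc_32/lib/tdata.so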

Please let us know an appropriate link for a 32-bit-only version of the ODBC drivers.

Note: We have used the steps mentioned in the following link to install the drivers: 
https://resource.microstrategy.com/Support/MainSearch.aspx?formatted=1&tnkey=36624

Thanks in advance.


Accepting Default Replacement Character in UTF-8 mode - partition elimination impact


Context:
Teradata 14, loading UTF-8 messages via JDBC connectivity.
When messages contain the � replacement character for an untranslatable character, and our DBS Control parameter 104 (AcceptReplacementCharacters) is set to FALSE, JDBC returns errors 1338/1339.
Turning DBS Control parameter 104 (AcceptReplacementCharacters) to TRUE would disable partition elimination for character-based partitioning.
I do not really understand the reason provided by Paul Sinclair in http://developer.teradata.com/blog/paulsinclair/2012/07/td-14-0-the-other-partitioning-enhancements (which might be due to my limited knowledge of things).
At the moment my perception is: once the replacement character is accepted into the Teradata environment, its value is evaluated as a 'replacement character' forever, independently of whether the set of untranslatable characters for a given character set changes over time. This should therefore not be a reason to cut off partition elimination for character-based partitioning.
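For readers who have not used the TD 14.0 feature in question, here is a minimal sketch (hypothetical table and names) of character-based partitioning, the kind whose partition elimination is affected by this setting; a predicate such as WHERE msg_txt = 'abc' can normally skip partitions:

-- Hypothetical character-based PPI, a TD 14.0 feature.
CREATE TABLE sandbox.msg_log (
    msg_id  INTEGER,
    msg_txt VARCHAR(100) CHARACTER SET UNICODE
)
PRIMARY INDEX (msg_id)
PARTITION BY CASE_N(
    msg_txt <  'N',      /* partition 1 */
    msg_txt >= 'N',      /* partition 2 */
    NO CASE OR UNKNOWN); /* everything else, including NULL */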
Other points of view / corrections welcome.
Thanks.
Regards, JL D
 


Problem with ANSI TIME type and ODBC driver


Hi guys,
I have a problem with the TIME data type and my ODBC driver.
In the ODBC driver I have DateTimeFormat=III and DisableParsing enabled.
When I try to run this query:
INSERT INTO  aidar.time_ansi 
(
"ID_1","COL_1","COL_2"
)
SELECT 
    last_key_1,
    (
    CASE
        WHEN f_1=0 THEN col_1 ELSE NULL
    END) col_1,
        (
    CASE
        WHEN f_2=0 THEN col_2 ELSE NULL
    END) col_2
    FROM 
    (
    SELECT 
        CAST(SUBSTR(last_key_1,11) AS INTEGER) last_key_1,
        CAST(SUBSTR(c_1,11) AS TIME(3)) col_1,
        SUBSTR(ff_1,11) f_1,
        SUBSTR(c_2,11) col_2,
        SUBSTR(ff_2,11) f_2
        FROM
        (
        SELECT
            OP_ROOT_KEY_ROWID r_rowid,
            MAX(CAST(CAST(OP_NUM_IN_TX AS FORMAT'-9(10)') AS CHAR(10)) || CAST(ID_1_NEW AS VARCHAR(70))) last_key_1,
            MIN(CAST(CAST(OP_NUM_IN_TX AS FORMAT'-9(10)') AS CHAR(10)) || OP_CODE) first_op_in_chain,
            MAX(CAST(CAST(OP_NUM_IN_TX AS FORMAT'-9(10)') AS CHAR(10)) || OP_CODE) last_op_in_chain,
            MAX(CAST(CAST(OP_NUM_IN_TX AS FORMAT'-9(10)') AS CHAR(10)) || CAST(COL_1_NEW AS VARCHAR(70))) c_1,
            MAX(CAST(CAST(OP_NUM_IN_TX AS FORMAT'-9(10)') AS CHAR(10)) || (
            CASE
                WHEN COL_1_NEW IS NOT NULL THEN '0'
                WHEN COL_1_OLD IS NULL THEN NULL ELSE  '1'
            END)) ff_1,
                MAX(CAST(CAST(OP_NUM_IN_TX AS FORMAT'-9(10)') AS CHAR(10)) || COL_2_NEW) c_2,
                MAX(CAST(CAST(OP_NUM_IN_TX AS FORMAT'-9(10)') AS CHAR(10)) || (
            CASE
                WHEN COL_2_NEW IS NOT NULL THEN '0'
                WHEN COL_2_OLD IS NULL THEN NULL ELSE  '1'
            END)) ff_2
            FROM aidar.time_ansi_LOG
            GROUP BY OP_ROOT_KEY_ROWID) c
        WHERE  SUBSTR( first_op_in_chain,11)='I'
            AND  SUBSTR(last_op_in_chain,11)<>'D'
    ) a
 
I get an error from Teradata: "Invalid operation for DateTime or Interval."
If I remove the CAST() function from CAST(SUBSTR(c_1,11) AS TIME(3)) col_1, the SQL statement is processed OK. But I want to use the cast, because I want the same behaviour across different SQL statements.
COL_1 is of the ANSI TIME data type.
As I understand it, the ODBC driver presents my CAST AS TIME(3) as CAST AS INTEGER FORMAT '99:99:99' because DateTimeFormat=III is set. I thought that the "Disable Parsing" setting would change this behaviour, but unfortunately it doesn't. So could you please tell me how I can run this query without changing DateTimeFormat? Or maybe it's impossible...
 
Thank you.


Pulling "SHOW VIEW" data using JDBC


Hello all,
I'm currently working on a project where I pull all of the CREATE/REPLACE VIEW DDL from our data warehouse using Java code.  I create a result set by querying the DBC.Tables table and use the RequestText field to process the SQL, unless the SQL is longer than the RequestText field can hold.  When that occurs, I run a separate query issuing a "SHOW VIEW DB.ViewName" command to Teradata.  This works great unless the length of the DDL is > 32,000 characters.
It appears that the ResultSet defaults to a data type of LONGVARCHAR(32000) for the result of the SHOW VIEW command, and anything that is larger than that is truncated at character 32,000.
Does anyone know how I can avoid this truncation? 
I assume that if I could change the data type of the ResultSet returned from Teradata to LONGVARCHAR(64000), it would fix the problem.  Does anyone know how to do that, or know of some other solution?
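For reference, a sketch of a commonly suggested workaround, under the assumption (worth verifying against your driver version) that the Teradata JDBC driver can return long SHOW output as more than one row, each holding a segment of the text with carriage-return line separators; concatenating all rows sidesteps any single-value limit:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ShowViewFetcher {
    // Fetch the complete view DDL by concatenating every row that
    // SHOW VIEW returns, instead of reading only the first row.
    static String fetchViewDdl(Connection con, String db, String view) throws SQLException {
        StringBuilder ddl = new StringBuilder();
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SHOW VIEW " + db + "." + view)) {
            while (rs.next()) {
                ddl.append(rs.getString(1));
            }
        }
        return ddl.toString().replace('\r', '\n'); // normalize line endings
    }
}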
 
Thanks in advance!
 
Ben


JDBC Fastload Error Tables


Is there a way to redirect where the error tables are created in a JDBC Fastload operation?
My scenario is loading a staging table in a schema/database where the ID doing the load does not have CREATE TABLE privileges. Per company standards, the error tables are typically created in a separate database.
As an example, my SQL statement is of the form:
INSERT INTO STG.MYTABLE (?,?,?);
And I would like to have the error tables created in the S_UTL schema.
Is this possible with JDBC Fastload?
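Depending on your driver version, the Teradata JDBC driver may offer FastLoad connection parameters for relocating the error tables. As an assumption to verify in the Teradata JDBC Driver User Guide for your release, a connection URL might look like:

// Hypothetical connection URL: TYPE=FASTLOAD enables JDBC FastLoad, and
// ERROR_TABLE_DATABASE (if supported by your driver version) asks the
// driver to create its error tables in S_UTL rather than in the
// target table's database.
String url = "jdbc:teradata://dbshost/DATABASE=STG,TYPE=FASTLOAD,"
           + "ERROR_TABLE_DATABASE=S_UTL";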
 
 


Executing a Teradata SQL script from a shell file


Hi,
I am new to Teradata.
Please help me understand how to execute a simple Teradata script from a Linux shell script.
I created a shell file, Sample.sh, and the code inside the shell file is below:
-----------------------
#! /bin/bash
.logon 127.0.0.1/DBC,DBC;
select * from employee where emp_id = $1;
.logoff;
.EXIT;
-----------------------
I execute the script with ./Sample.sh 20 > output.txt (I am passing the value 20 as emp_id).
By executing the above command, I would like the output of the SQL query printed to the file output.txt.
But it is not working.
I am getting the below error:
".logon command not found
.logoff command not found"
Could someone please help me with this?
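The error happens because .logon, .logoff, and .EXIT are BTEQ commands, not shell commands, so bash tries to execute them itself. A minimal sketch of the usual fix, assuming BTEQ is installed and on the PATH, is to feed the commands to the bteq utility through a here-document:

#!/bin/bash
# Sample.sh: pass the commands to bteq via a heredoc; the shell
# substitutes $1 before bteq ever sees the script text.
bteq <<EOF
.logon 127.0.0.1/DBC,DBC;
select * from employee where emp_id = $1;
.logoff;
.exit;
EOF

Invoked as ./Sample.sh 20 > output.txt, the query output then lands in output.txt as intended.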
 
 
 


Problem with Connector for Hadoop 1.3 and HCatalog


I'm trying to use the TDCH 1.3 command-line edition to import from Teradata to HCatalog.  I consistently get an exception.  I've tried various versions of Hive and HCatalog with no success: CDH 4.5 with Hive 0.11, CDH 5.0 with Hive 0.12, and HDP 2.1 with Hive 0.13.  All throw the same exception.
 
Here is my job setup:

cat > _td.hql <<EOF
create database if not exists td_gnis;
drop table if exists td_gnis.lakes;
create table td_gnis.lakes (
  Feature_ID       STRING,
  Feature_name     STRING,
  Primary_lat_dec  DOUBLE,
  Primary_lon_dec  DOUBLE
)
STORED AS TEXTFILE;
EOF

hive -f _td.hql

hadoop jar $TDCH_JAR com.teradata.connector.common.tool.ConnectorImportTool \
-libjars $LIB_JARS \
-url jdbc:teradata://192.168.11.200/database=vmtest \
-username vmtest \
-password vmtest \
-classname com.teradata.jdbc.TeraDriver \
-fileformat textfile \
-jobtype hcat \
-method split.by.amp \
-sourcetable gnis \
-sourcefieldnames "Feature_ID,Feature_name,Primary_lat_dec,Primary_lon_dec" \
-targetdatabase td_gnis \
-targettable lakes \
-targetfieldnames "Feature_ID,Feature_name,Primary_lat_dec,Primary_lon_dec" \
-nummappers 2

 
And this is the exception that gets thrown:

14/06/25 15:18:30 INFO hive.metastore: Trying to connect to metastore with URI thrift://hdp2.jri.revelytix.com:9083
14/06/25 15:18:30 INFO hive.metastore: Connected to metastore.
14/06/25 15:18:30 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
14/06/25 15:18:30 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/06/25 15:18:31 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByAmpProcessor starts at:  1403723911542
14/06/25 15:18:31 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByAmpProcessor ends at:  1403723911542
14/06/25 15:18:31 INFO processor.TeradataInputProcessor: the total elapsed time of input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByAmpProcessor is: 0s
14/06/25 15:18:31 INFO tool.ConnectorImportTool: com.teradata.connector.common.exception.ConnectorException: java.lang.NullPointerException
	at org.apache.hcatalog.data.schema.HCatSchema.get(HCatSchema.java:99)
	at com.teradata.connector.hcat.utils.HCatSchemaUtils.getTargetFieldsTypeName(HCatSchemaUtils.java:37)
	at com.teradata.connector.hcat.processor.HCatOutputProcessor.outputPreProcessor(HCatOutputProcessor.java:70)
	at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:88)
	at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:48)
	at com.teradata.connector.common.tool.ConnectorImportTool.run(ConnectorImportTool.java:57)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at com.teradata.connector.common.tool.ConnectorImportTool.main(ConnectorImportTool.java:694)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

	at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:103)
	at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:48)
	at com.teradata.connector.common.tool.ConnectorImportTool.run(ConnectorImportTool.java:57)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at com.teradata.connector.common.tool.ConnectorImportTool.main(ConnectorImportTool.java:694)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
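One possibility worth ruling out (my assumption, not something the stack trace proves): HCatalog stores Hive column names in lower case, so the mixed-case names passed via -targetfieldnames may come back null from HCatSchema.get(). The same job with lower-cased target field names would look like:

hadoop jar $TDCH_JAR com.teradata.connector.common.tool.ConnectorImportTool \
-libjars $LIB_JARS \
-url jdbc:teradata://192.168.11.200/database=vmtest \
-username vmtest \
-password vmtest \
-classname com.teradata.jdbc.TeraDriver \
-fileformat textfile \
-jobtype hcat \
-method split.by.amp \
-sourcetable gnis \
-sourcefieldnames "Feature_ID,Feature_name,Primary_lat_dec,Primary_lon_dec" \
-targetdatabase td_gnis \
-targettable lakes \
-targetfieldnames "feature_id,feature_name,primary_lat_dec,primary_lon_dec" \
-nummappers 2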

 


TDCH CLI 1.3 problems with export from HDFS to Teradata 13.10


Hi,
 
I am trying to export data from HDFS to Teradata using the Teradata Connector for Hadoop, CLI version 1.3. I am constantly hit with the below error while executing in batch-insert mode.
--- Error: com.teradata.hadoop.exception.TeradataHadoopSQLException: java.sql.BatchUpdateException: [Teradata JDBC Driver] [TeraJDBC 14.00.00.39] [Error 1338] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. Details of the failure can be found in the exception chain that is accessible with getNextException.
Teradata version: 13.10
Hadoop version: HDP 2.1
TDCH connector: 1.3
The export command used is below. There are in total 1.3 billion records in the HDFS file.

hadoop jar $TDCH_JAR com.teradata.hadoop.tool.TeradataExportTool -url jdbc:teradata://1.1.1.1/DATABASE=TESTDB -username user1 -password pwd123 -jobtype hdfs -sourcepaths /apps/hive/warehouse/tab1 -nummappers 100 -separator '|' -targettable td_tab1
echo $TDCH_JAR

/usr/lib/tdch/teradata-connector-1.3.jar
Error: com.teradata.connector.common.exception.ConnectorException: java.sql.BatchUpdateException: [Teradata JDBC Driver] [TeraJDBC 14.00.00.39] [Error 1338] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. Details of the failure can be found in the exception chain that is accessible with getNextException.
        at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:147)
        at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:136)
        at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:253)
        at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatch(TDPreparedStatement.java:2352)
        at com.teradata.connector.teradata.TeradataBatchInsertOutputFormat$TeradataRecordWriter.write(TeradataBatchInsertOutputFormat.java:143)
        at com.teradata.connector.teradata.TeradataBatchInsertOutputFormat$TeradataRecordWriter.write(TeradataBatchInsertOutputFormat.java:110)
        at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.write(ConnectorOutputFormat.java:107)
        at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.write(ConnectorOutputFormat.java:65)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
        at com.teradata.connector.common.ConnectorMMapper.map(ConnectorMMapper.java:129)
        at com.teradata.connector.common.ConnectorMMapper.run(ConnectorMMapper.java:117)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: com.teradata.jdbc.jdbc_4.util.JDBCException: [Teradata JDBC Driver] [TeraJDBC 14.00.00.39] [Error 1339] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. The parameter set was not executed and should be resubmitted individually using the PreparedStatement executeUpdate method.
        at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDriverJDBCException(ErrorFactory.java:93)
        at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDriverJDBCException(ErrorFactory.java:63)
        at com.teradata.jdbc.jdbc_4.statemachine.PreparedBatchStatementController.handleRunException(PreparedBatchStatementController.java:95)
        at com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:129)
        at com.teradata.jdbc.jdbc_4.statemachine.PreparedBatchStatementController.run(PreparedBatchStatementController.java:57)
        at com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:381)
        at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:233)
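As the 1338/1339 text says, the real database error is only reachable through the exception chain. A short sketch (a hypothetical helper, not part of TDCH) for dumping that chain, which usually surfaces the underlying failure, for example a duplicate-row or data-conversion error:

import java.sql.SQLException;

public class BatchErrorDump {
    // Print every exception in a JDBC batch failure chain; the Teradata
    // driver attaches the real database error behind the generic
    // 1338/1339 batch wrappers.
    static void dumpChain(SQLException e) {
        for (SQLException cur = e; cur != null; cur = cur.getNextException()) {
            System.err.println(cur.getErrorCode() + " " + cur.getMessage());
        }
    }
}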

Can someone please guide me on how to get this export working? I have tried it numerous times, and every time a different number of records gets inserted before the entire job errors out.
I also get exactly the same error when I try to use the Hortonworks Connector for Teradata on the HDP 2.0 platform.
 
Thanks,
Anand


How to Connect Teradata with Informatica?


I have installed Teradata Express 14.10 in VMware Player. I can log on using dbc/dbc and can also access it using Teradata Studio Express from Windows 7. I have downloaded Informatica from https://edelivery.oracle.com on Windows 7. But now I am a bit perplexed, as I don't know how to connect Informatica with the Teradata instance in VMware Player: while installing, it does not show Teradata as a database type. Can anyone please guide me with the steps to download and install Informatica for Teradata? :(
 
Thanks,
Richa


Any plans for a TD GeoImport/Export tool for 64-bit Windows?


I was hoping to use the TD-GeoImport tool to get some data into Teradata, but alas, I have a 64-bit machine. I should have read the "32-bit" in the label first, before getting this error:
Can't load IA 32-bit .dll on a AMD 64-bit platform
Any plans to release this tool for 64-bit machines?
Is anyone loading with 64-bit machines and using the GDAL OGR library?
 


Teradata LOBs are not allowed to be selected in Record or Indicator modes + SSIS error


I am creating an SSIS package with the OLE DB Provider for Teradata 14.00.0.1.
My table has a column of data type CLOB (Character Large Object). When I try to use this table in my SSIS package, it throws the following error:
 Exception from HRESULT: 0xC0202009

Error at dft - AREA [ole_src_AREA [22]]: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005.

An OLE DB record is available.  Source: "OLE DB Provider for Teradata" Hresult: 0x80004005  Description: "[Teradata Database] LOBs are not allowed to be selected in Record or Indicator modes. ".

 

Any solution for this issue?
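A common workaround (an assumption that it fits your data, since it truncates anything longer than the VARCHAR limit) is to cast the CLOB to VARCHAR in the source query, so that the answer set contains no LOB columns and can be returned in Record/Indicator mode:

-- Hypothetical source query for the SSIS OLE DB source; column and
-- table names are placeholders. Data beyond 30000 characters is lost.
SELECT area_id,
       CAST(SUBSTR(area_clob_col, 1, 30000) AS VARCHAR(30000)) AS area_txt
FROM mydb.AREA;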

Regarding installation of Teradata tools on a personal computer


Hi Folks,
Recently I started my career in Teradata, and I would like to install Teradata on my personal laptop. While selecting the data source, I am getting a "10061 WSA EConnRefused" error. Could you please provide a solution for this issue?
 
Regards,
Manoj 
 
 


Problem with Reconnect using .NET and ODBC drivers


Hi All,
I am facing an issue with reconnect in the .NET and ODBC connectors from SQLA when I do a DBS restart. I am seeing this on a TD Express 15.0 SLES 10 40 GB VM.
 
Teradata .NET Data Provider:
The issue here is that after a successful DBS restart, SQLA is not maintaining the current session context.
A 'sel session' query returns a different session number, and it keeps incrementing with each retry of the query from there on.
To confirm this, I tried to reproduce the issue with a GTT, which should survive a typical DBS restart.
Steps I used to reproduce this issue:
1) Created a GTT and inserted a row in it.

create GLOBAL TEMPORARY TABLE gtt_restart(i int,j char(10)) 
unique primary index(i) on commit preserve rows;

ins gtt_restart(1,'a');

2) Issued a select query 'sel * from gtt_restart' and was able to successfully fetch the data. The 'sel session' query and the sessioninfo view display the session number as 1010.
3) Performed a DBS restart using tpareset and waited until TD was up.
4) Re-executed the query 'sel * from gtt_restart'. It failed with "SELECT Failed [10001] Cannot close an Active Request. Please Abort the Request". Re-executed the query again and saw that the session gets reconnected, but there is no data in the GTT. I verified the session number: this time the 'sel session' query displays the session number as 1012, and the sessioninfo table has two rows, with the old session number 1010 and the new session number 1012.
If I reissue the 'sel session' query or fetch session info from the sessioninfo view, the session number gets incremented.
ODBC driver (DSN configured with Enable Reconnect):
Using the ODBC driver in SQLA, after a DBS restart, when I try to re-run any query in the SQLA window, it throws an error message box '10054 WSA E ConnReset: Connection reset by peer', the status bar displays "executing query", and it remains like that forever.
This works fine through BTEQ, and the GTT is able to retain its data (with a 2825 error) after a DBS restart.
 
OS: Windows 8.1 Pro
DBS package versions used:
TDExpress15.0.0.8_Sles10:~ # pdepath -i
PDE: 15.00.00.08
TDGSS: 15.00.00.07
TDBMS: 15.00.00.07
TGTW: 15.00.00.00
RSG: 15.00.00.00
PDEGPL: 15.00.00.08

 

Client versions used (TTU 15.00 for Windows):
.NET Data Provider for Teradata 14.11.0.1 (the version from the TTU 15.00 client setup package)
ODBC Driver for Teradata 15.00.0.1
ODBC Driver for Teradata nt-x8664 15.00.0.1
Teradata SQL Assistant 15.00

 


Java code with a JDBC connection to Teradata via Message Broker


I'm a bit of a novice in this arena, so I hope this doesn't seem like an odd question.
 
We currently use Message Broker to interface HL7 transactions from our mainframe system.  At the moment, a Message Broker interface feeds these transactions to Sybase using ODBC.  From the information I've read, you cannot do something similar with Teradata.  What we need to do is read the HL7 transactions using Java and JDBC, and call a stored procedure in Teradata to load the transactions into a staging table.
Does anyone have sample code that does this?
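Not Message Broker-specific, but here is a minimal JDBC sketch (hypothetical host, procedure, and table names) of calling a Teradata stored procedure; a JavaCompute node could wrap the same calls:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

// Connect to Teradata and call a hypothetical stored procedure that
// inserts one raw HL7 message into a staging table.
public class LoadHl7 {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:teradata://dbshost/DATABASE=staging"; // placeholder host
        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             CallableStatement cs = con.prepareCall("{CALL staging.load_hl7(?)}")) {
            cs.setString(1, "MSH|^~\\&|...");  // the HL7 message text
            cs.execute();
        }
    }
}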
 
Thanks!


How to connect to Aster using DataStage

$
0
0

Hi,
I am currently using DataStage v9.1 and would like to know if I can use the Teradata Connector to connect to Teradata Aster.
If not, do we have any alternative DB connectors/stages to connect to Aster? Also, please let me know the configuration details that I have to set up before connecting to the Aster DB.
 
Appreciate your help on this.
 
Regards,
Arun


Excel VBA code to connect to Teradata and execute a query


I am getting an error when trying to use VBA code to connect to Teradata from Excel 2010. I also want to return the query results to Excel. Can someone help me with the code for this?


Informatica 9.1 with Teradata V14 using TTU 13


Has anyone used Informatica 9.1 with Teradata V14 using TTU 13? Does it work flawlessly?


No transaction rollback when using TransactionScope without Complete()


Hello,
I have a simple unit test which is failing the final assertion:

        [TestMethod]
        public void ExecuteNonQueryWithCommand_ShouldUseTransaction()
        {
            const string insertString = "insert into Region values (77, 'Elbonia')";
            const string countString = "select count(*) from Region";
            const string deleteString = "delete from Region where RegionId = 77";

            TdConnection cn = new TdConnection(db.ConnectionString);
            cn.Open();

            TdCommand insertCmd = new TdCommand(insertString, cn);
            TdCommand countCmd = new TdCommand(countString, cn);
            TdCommand deleteCmd = new TdCommand(deleteString, cn);

            int initialRows = (int)countCmd.ExecuteScalar();

            using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
            {
                int rows = insertCmd.ExecuteNonQuery();
                Assert.AreEqual(1, rows);
            }

            int postScopeRows = (int)countCmd.ExecuteScalar();
            deleteCmd.ExecuteNonQuery();

            cn.Close();
            Assert.AreEqual(initialRows, postScopeRows);
        }

I was under the impression, per the MSDN documentation linked below (see the Remarks section), that the transaction should roll back due to the lack of a scope.Complete() call within the using block.  Does Teradata support this feature?
http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope(v=vs.110).aspx 
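One thing worth checking, as an assumption on my part about the cause rather than a statement about the Teradata provider: ADO.NET connections normally enlist in the ambient transaction only when they are opened inside the TransactionScope, and in the test above cn.Open() runs before the scope exists, so the insert may never be enlisted at all. A sketch of the reordered test body:

// Open the connection inside the scope so it can auto-enlist in the
// ambient transaction; disposing the scope without Complete() should
// then roll the insert back.
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
using (TdConnection cn = new TdConnection(db.ConnectionString))
using (TdCommand insertCmd = new TdCommand(insertString, cn))
{
    cn.Open();  // enlistment happens here, inside the scope
    Assert.AreEqual(1, insertCmd.ExecuteNonQuery());
    // no scope.Complete() => rollback when the scope is disposed
}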


Can a non-appliance server be connected to the BYNET?

$
0
0

I'm looking at the Teradata Appliance for SAS High Performance Analytics, which I see is connected directly to the BYNET. Is it possible to connect other non-Teradata servers to the BYNET for connectivity, or is this specific to Teradata appliances?


ADO, ODBC, and Unicode


I'm using VBA and the Microsoft ActiveX Data Objects 2.6 Library to access our Teradata 14.00.05.03 server using Teradata ODBC Driver 15.00.00.01.
I have a table that contains Unicode characters. When I attempt to bring those back through the connection, the Unicode characters are converted to ANSI and I end up with a bunch of arrows and question marks. I've defined CharacterSet=UTF16 in my connection string. I've tried different combinations of settings and have had no luck. No matter what I do, ADO shoves the field into type adChar instead of the Unicode-compliant adWChar or adVarWChar.
I would consider workarounds where, perhaps, I convert the VARCHAR to binary in the SQL string and then convert the binary back into UTF-16, but that's ugly, and I'm not sure whether Teradata is capable of converting a Unicode VARCHAR to binary.
 
