I'm executing a query that returns a couple of thousand rows, and the customer needs an extra row showing the sum totals of certain numeric columns. I've achieved this using group by grouping sets, but that construct supports at most 32 columns that are not in aggregate functions. My problem is that I have to return nearly 45 columns, of which only 10 are left out of the group by because they are inside aggregate functions.
Original query was something like this:
select
o.Name,
ci.Id,
ci.OriginId,
ci.Varchar1,
ci.Varchar2,
ci.Varchar3,
ci.Varchar4,
ci.Varchar5,
ci.Varchar6,
ci.Varchar7,
ci.Varchar8,
ci.Varchar9,
ci.Varchar10,
ci.Varchar11,
ci.Varchar12,
ci.Varchar13,
ci.Varchar14,
ci.Varchar15,
ci.Varchar16,
ci.Varchar17,
ci.Varchar18,
ci.Varchar19,
ci.Varchar20,
sum(ci.Decimal1) as Decimal1,
sum(ci.Decimal2) as Decimal2,
sum(ci.Decimal3) as Decimal3,
sum(ci.Decimal4) as Decimal4,
sum(ci.Decimal5) as Decimal5,
sum(ci.Decimal6) as Decimal6,
sum(ci.Decimal7) as Decimal7,
sum(ci.Decimal8) as Decimal8,
sum(ci.Decimal9) as Decimal9,
sum(ci.Decimal10) as Decimal10,
ci.Date1,
ci.Date2,
ci.Date3,
ci.Date4,
ci.Date5,
ci.Date6,
ci.Date7,
ci.Date8,
ci.Date9,
ci.Date10
from
Items ci
inner join Origins o
on ci.OriginId = o.Id
group by grouping sets((
o.Name,
ci.Id,
ci.OriginId,
ci.Varchar1,
ci.Varchar2,
ci.Varchar3,
ci.Varchar4,
ci.Varchar5,
ci.Varchar6,
ci.Varchar7,
ci.Varchar8,
ci.Varchar9,
ci.Varchar10,
ci.Varchar11,
ci.Varchar12,
ci.Varchar13,
ci.Varchar14,
ci.Varchar15,
ci.Varchar16,
ci.Varchar17,
ci.Varchar18,
ci.Varchar19,
ci.Varchar20,
ci.Date1,
ci.Date2,
ci.Date3,
ci.Date4,
ci.Date5,
ci.Date6,
ci.Date7,
ci.Date8,
ci.Date9,
ci.Date10), ())
I've tried splitting the query in two, so that the number of columns in each group by stays below the maximum. If I execute each query separately I get the desired results, but if I union them I get an error (cannot convert nvarchar to numeric).
The result was something like this:
select
o.Name,
ci.Id,
ci.OriginId,
sum(ci.Decimal1) as Decimal1,
sum(ci.Decimal2) as Decimal2,
sum(ci.Decimal3) as Decimal3,
sum(ci.Decimal4) as Decimal4,
sum(ci.Decimal5) as Decimal5,
sum(ci.Decimal6) as Decimal6,
sum(ci.Decimal7) as Decimal7,
sum(ci.Decimal8) as Decimal8,
sum(ci.Decimal9) as Decimal9,
sum(ci.Decimal10) as Decimal10,
ci.Date1,
ci.Date2,
ci.Date3,
ci.Date4,
ci.Date5,
ci.Date6,
ci.Date7,
ci.Date8,
ci.Date9,
ci.Date10
from
Items ci
inner join Origins o
on ci.OriginId = o.Id
group by grouping sets((
o.Name,
ci.Id,
ci.OriginId,
ci.Date1,
ci.Date2,
ci.Date3,
ci.Date4,
ci.Date5,
ci.Date6,
ci.Date7,
ci.Date8,
ci.Date9,
ci.Date10), ())
union
select
o.Name,
ci.Id,
ci.OriginId,
ci.Varchar1,
ci.Varchar2,
ci.Varchar3,
ci.Varchar4,
ci.Varchar5,
ci.Varchar6,
ci.Varchar7,
ci.Varchar8,
ci.Varchar9,
ci.Varchar10,
ci.Varchar11,
ci.Varchar12,
ci.Varchar13,
ci.Varchar14,
ci.Varchar15,
ci.Varchar16,
ci.Varchar17,
ci.Varchar18,
ci.Varchar19,
ci.Varchar20
from
Items ci
inner join Origins o
on ci.OriginId = o.Id
group by grouping sets((
o.name,
ci.Id,
ci.OriginId,
ci.Varchar1,
ci.Varchar2,
ci.Varchar3,
ci.Varchar4,
ci.Varchar5,
ci.Varchar6,
ci.Varchar7,
ci.Varchar8,
ci.Varchar9,
ci.Varchar10,
ci.Varchar11,
ci.Varchar12,
ci.Varchar13,
ci.Varchar14,
ci.Varchar15,
ci.Varchar16,
ci.Varchar17,
ci.Varchar18,
ci.Varchar19,
ci.Varchar20), ())
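I suspect the conversion error comes from UNION matching result columns by position rather than by name: the decimal and date columns of the first select line up against varchar columns of the second. A minimal illustration of the positional matching (not my actual query):

```sql
-- UNION aligns result columns strictly by position, so the types in
-- each position must be compatible across all branches:
select 1 as a, 'x' as b
union
select 'y', 2;
-- error: conversion of the varchar value 'y' to int fails
```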
Another way (if possible) would be to drop the group by grouping sets in SQL and generate the totals row in C#, since the result of the query is received into an IEnumerable, but I don't know if a SUM function is available there.
Any advice will be appreciated.
Thanks in advance.
If what you are trying to do is basically all data plus a total row, consider the following approach. Do not group by a grouping set that includes all non-aggregated columns; instead group by a row ID (an existing one that is unique across all data rows, or an artificial one created with the row_number() function). Also consider joining auxiliary tables after the total is calculated.
The example follows.
Setup sample data:
declare @origs table (id int, name varchar(20));
insert into @origs values (1, 'orig1'), (2, 'orig2');
declare @items table (
id int, orig_id int,
column1 varchar(20), column2 varchar(20),
value1 float, value2 float);
insert into @items values
(1, 1, 'c1.1', 'c2.1', 100, 10)
,(2, 1, 'c1.2', 'c2.2', 200, 20)
,(3, 2, 'c1.3', 'c2.3', 300, 30);
The query below returns all data plus a total row, the way you are trying to do it:
select i.id, o.name as orig, i.column1, i.column2, sum(i.value1) val1, sum(i.value2) val2
from @items i
join @origs o on o.id = i.orig_id
group by grouping sets ((i.id, o.name, i.column1, i.column2), ());
The output is:
id orig column1 column2 val1 val2
----- ----- -------- -------- ----- -----
1 orig1 c1.1 c2.1 100 10
2 orig1 c1.2 c2.2 200 20
3 orig2 c1.3 c2.3 300 30
NULL NULL NULL NULL 600 60
Compare it to the next query, which groups the data by a single column. The auxiliary table @origs is also joined after the data is grouped.
;with items as (
select
case grouping(id) when 0 then max(id) else NULL end id,
case grouping(id) when 0 then max(orig_id) else NULL end orig_id,
case grouping(id) when 0 then max(column1) else NULL end column1,
case grouping(id) when 0 then max(column2) else NULL end column2,
val1 = sum(value1),
val2 = sum(value2)
from @items
group by rollup (id)
)
select i.id, o.name as orig, i.column1, i.column2, i.val1, i.val2
from items i
left join @origs o on o.id = i.orig_id;
Output is the same:
id orig column1 column2 val1 val2
----- ----- -------- -------- ----- -----
1 orig1 c1.1 c2.1 100 10
2 orig1 c1.2 c2.2 200 20
3 orig2 c1.3 c2.3 300 30
NULL NULL NULL NULL 600 60
Given only a few thousand rows, I'd use a stored procedure to get the results without the grand totals into a temp table or table valued variable, and then return the results as a UNION ALL of that table plus a grand total over the top of it.
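A rough sketch of that shape, with the question's column list cut down for brevity (the remaining Varchar/Date columns follow the same pattern):

```sql
-- Detail rows into a temp table first (a plain GROUP BY is not subject
-- to the 32-expression grouping-sets limit the question ran into),
-- then append the grand total with UNION ALL.
select
    o.Name, ci.Id, ci.OriginId, ci.Varchar1, ci.Date1,
    sum(ci.Decimal1) as Decimal1
into #detail
from Items ci
inner join Origins o on ci.OriginId = o.Id
group by o.Name, ci.Id, ci.OriginId, ci.Varchar1, ci.Date1;

select Name, Id, OriginId, Varchar1, Date1, Decimal1
from #detail
union all
select null, null, null, null, null, sum(Decimal1)
from #detail;

drop table #detail;
```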
I am trying to do the following but I cannot manage to get it right yet :(.
I have these tables:
table1 -> tb1_id, tb1_name
Sample Data:
--------------
1 group1
2 group2
3 group3
4 group4
5 group5
table2 -> tb2_id, tb2_sector, tb2_tb3_id
Sample Data:
--------------
1 alpha 1
2 beta 2
3 gamma 2
4 delta 2
5 epsilon 4
table3 -> tb3_id, tb3_mid, tb3_section
Sample Data:
--------------
1 234 alpha,beta,gama,delta
This is the output that I am looking for:
Name Count %
------ ----- -----
group1 1 25%
group2 3 75%
group3 0 0%
group4 0 0%
group5 0 0%
Basically I need to split a column value delimited by commas (tb3_section in table3), then find the right group for each value (table2 gives me the group id to link with table1), and then do a total count by group and get the percentage (assuming the total is 100%).
This is the query I tried so far:
I searched for split-value samples and found one that does the split by first creating a numbers table:
create table numbers (
`n` INT(11) SIGNED
, PRIMARY KEY(`n`)
);
INSERT INTO numbers(n) SELECT @row := @row + 1 FROM
(SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t,
(SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t2,
(SELECT 0 UNION ALL SELECT 1) t8,
(SELECT @row:=0) ti;
Afterwards, I did this:
select tb3_section, count(1) from (
select
tb3_mid,
substring_index(
substring_index(tb3_section, ',', n),
',',
-1
) as tb3_section from table3
join numbers
on char_length(tb3_section)
- char_length(replace(tb3_section, ',', ''))
>= n - 1
) tb3_section_dashboard
group by 1
This doesn't give me the group count; it just does the split of tb3_section, without the correct count and the equivalent percentage. Any ideas would be much appreciated, thanks a lot.
LATEST UPDATE
First of all, I would like to thank @eggyal for pointing me in the right direction, and @Shadow who, despite knowing that I was not taking the best approach, came up with a quick fix to my problem. I managed to change the approach and removed the comma-delimited values from table3. Instead, I now add a separate row for each value (and added a constraint to avoid duplicates).
Now table3 looks like:
Sample Data:
--------------
1 234 alpha
2 234 beta
3 234 gama
4 234 delta
5 235 alpha
Here is the query I have taken from @Shadow's sample:
SELECT t1.tb1_name, COUNT(t3.tb3_section) AS no_per_group,
COUNT(t3.tb3_section) / t4.no_of_groups AS percentage
FROM t1
LEFT JOIN t2 ON t1.tb1_id = t2.tb2_tb3_id
INNER JOIN t3 ON t2.tb2_sector = t3.tb3_section
JOIN (SELECT COUNT(*) AS no_of_groups
      FROM t3 INNER JOIN t2 ON t2.tb2_sector = t3.tb3_section) t4
GROUP BY t1.tb1_name
Instead of using find_in_set, I now use = to match the exact value.
Now I get something like the following, but the percentage looks odd and I am missing a group that doesn't have a match:
Name no_per_group percentage
----- ------------- ----------
group1 2 0.1053
group3 3 0.1579
group4 3 0.1579
group5 3 0.1579
Although still I need something like:
Name Count %
------ ----- -----
group1 1 25%
group2 3 75%
group3 0 0%
group4 0 0%
group5 0 0%
Notice that if there is no match in a group, I still need to show that group.
Because I have thousands of records which differ from each other, I need to add another condition: where tb3_mid = 234. This way, the results are restricted to that tb3_mid.
The best solution would be to redesign your table structure and move the data in the delimited values list to a separate table.
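A normalized layout might look roughly like this (table and column names are illustrative):

```sql
-- One row per (message, sector) pair instead of a comma-separated list.
CREATE TABLE table3_sections (
    tb3_id INT NOT NULL,
    sector VARCHAR(20) NOT NULL,
    PRIMARY KEY (tb3_id, sector)
);

-- 'alpha,beta,gama,delta' for tb3_id = 1 becomes four rows:
INSERT INTO table3_sections (tb3_id, sector)
VALUES (1, 'alpha'), (1, 'beta'), (1, 'gama'), (1, 'delta');
```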
The quick solution is to utilise MySQL's find_in_set() function.
To get the total count of entries in the messages table (table3):
select count(*) as no_of_groups
from t3 inner join t2 on find_in_set(t2.tb2_sector,t3.tb3_section)>0
To get the counts per group, add a join to table1 and group by group name. To calculate the percentage, add the above query as a subquery:
select t1.tb1_name, count(t3.tb3_section) as no_per_group, count(t3.tb3_section) / t4.no_of_groups as percentage
from t1 left join t2 on t1.tb1_id=t2.tb2_tb3_id
inner join t3 on find_in_set(t2.tb2_sector,t3.tb3_section)>0
join (select count(*) as no_of_groups
from t3 inner join t2 on find_in_set(t2.tb2_sector,t3.tb3_section)>0) t4 -- no join condition makes a Cartesian join
group by t1.tb1_name
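If groups without a match must still appear (as in the desired output shown in the question), one possible variant is to make both joins LEFT joins, so that COUNT of the nullable column yields 0 for unmatched groups. This is an untested sketch of the idea:

```sql
-- LEFT joins keep every group; COUNT(t3.tb3_section) ignores the NULLs
-- produced for unmatched groups, so those groups show up with 0.
select t1.tb1_name,
       count(t3.tb3_section) as no_per_group,
       count(t3.tb3_section) / t4.no_of_groups as percentage
from t1
left join t2 on t1.tb1_id = t2.tb2_tb3_id
left join t3 on find_in_set(t2.tb2_sector, t3.tb3_section) > 0
join (select count(*) as no_of_groups
      from t3 inner join t2
        on find_in_set(t2.tb2_sector, t3.tb3_section) > 0) t4
group by t1.tb1_name, t4.no_of_groups
```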
I am new to LINQ and joins, so please forgive me if I am asking it wrong.
I have two tables
Table1
id name date
1 Mike 20-10-15
2 John 21-10-15
3 Sam 23-10-15
Table2
id name date
1 Ashle 19-10-15
2 Lily 21-10-15
3 Jeni 22-10-15
4 April 23-10-15
I need 5 records using joins, ordered by date, most recent first.
Can you guys help me? I really need to figure out how joins work with orderby.
Thanks
EDIT:
They are two different tables with no foreign key, so I think I can't use a join; so far what I have done is this:
var combinddata = (from t1 in db.Table1
select t1.id)
.Concat(from t2 in db.Table2
select t2.id);
I don't know how to get only 5 records, or how to compare records from both tables by DateTime.
Output should be
Sam
April
Jeni
John
Lily
You can concatenate equal anonymous types from different tables. If you also select the dates, you can sort by them, in descending order, and take the first 5 records:
Table1.Select (t1 =>
new
{
Id = t1.Id,
Name = t1.Name,
Date = t1.Date
}
).Concat(
Table2.Select (t2 =>
new
{
Id = t2.Id,
Name = t2.Name,
Date = t2.Date
}
))
.OrderByDescending (x => x.Date).Take(5)
Note that this gives precedence to items in Table1. If item 5 and 6 in the concatenated result are on the same date, but from Table1 and Table2, respectively, you only get the item from Table1.
If you want, you can select only the names from this result, but I assume that your output only shows the intended order of record, not the exact expected result.
var query =
    from t1 in table1
    join t2 in table2 on t1.id equals t2.id
    orderby t1.date descending
    select t1.name;
Try this way
var combinddata = (from t1 in db.Table1
                   select new { t1.Name, t1.date })
                  .Concat(from t2 in db.Table2
                          select new { t2.Name, t2.date })
                  .OrderByDescending(x => x.date)
                  .Take(5)
                  .Select(x => x.Name);
getName_as_Rows is an array which contains some names.
I want to set an int value to 1 if the record is found in the database.
con1.Open();
for (int i = 0; i < 100; i++)
{
    using (var command = new SqlCommand("select somecolumn from sometable where somecolumn = @Value", con1))
    {
        command.Parameters.AddWithValue("@Value", getName_as_Rows[i]);
        command.ExecuteNonQuery();
    }
}
I am looking for:
bool recordexist;
if the above record exists, then set it to true, else false, within the loop.
I have to do some other stuff if the record exists.
To avoid making N queries to the database, which could be very expensive in terms of processing, network and so forth, I suggest you query only once using a trick I learned. First you need a function in your database that splits a string into a table.
CREATE FUNCTION [DelimitedSplit8K]
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
-- enough to cover VARCHAR(8000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "zero base" and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT 0 UNION ALL
SELECT TOP (DATALENGTH(ISNULL(@pString,1))) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT t.N+1
FROM cteTally t
WHERE (SUBSTRING(@pString,t.N,1) = @pDelimiter OR t.N = 0)
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY s.N1),
Item = SUBSTRING(@pString,s.N1,ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000))
FROM cteStart s
GO
Second, concatenate your 100 values into a single comma-delimited string:
'Value1,Value2,Value3'...
In Sql Server you can just join the values with your table
SELECT somecolumn FROM sometable t
INNER JOIN [DelimitedSplit8K](@DelimitedString, ',') v ON v.Item = t.somecolumn
So you can check 100 strings at a time with only one query.
Use var result = command.ExecuteScalar() and check if result != null
But a better option than looping would be to use a single select statement like
SELECT COUNT(*) FROM TABLE WHERE COLUMNVAL >= 0 AND COLUMNVAL < 100
and run ExecuteScalar on that; if the value is > 0, set your variable to 1.
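If the per-name loop is kept, a single scalar query per name (using the question's placeholder table and column names) also maps cleanly onto ExecuteScalar:

```sql
-- Returns 1 if a matching row exists, 0 otherwise; read the result
-- with ExecuteScalar and assign the bool directly.
select case when exists (select 1 from sometable where somecolumn = @Value)
            then 1 else 0 end;
```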
I have the following output with me from multiple tables
id b c b e b g
abc 2 123 3 321 7 876
abd 2 456 3 452 7 234
abe 2 0 3 123 7 121
abf 2 NULL 3 535 7 1212
Now I want to insert these values into another table and the insert query for a single command is as follows:
insert into resulttable values (id,b,c), (id,b,e) etc.
For that I need to do a select such that it gives me
id,b,c
id,b,e etc
I don't mind getting rid of b too, as it can be selected using a C# query.
How can I achieve this using a single query in SQL? Again, please note it's not a table, it's output from different tables.
My query should look as follows: from the above I need to do something like
select b.a, b.c
union all
select b.d,b.e from (select a,c,d,e from <set of join>) b
But unfortunately that does not work
INSERT resulttable
SELECT id, b, c
FROM original
UNION
SELECT id, b, e
FROM original
Your example has several columns named 'b' which isn't allowed...
Here, #tmporigin refers to your original query that produces the data in the question. Just replace the table name with a subquery.
insert into resulttable
select
o.id,
case a.n when 1 then b1 when 2 then b2 else b3 end,
case a.n when 1 then c when 2 then e else g end
from #tmporigin o
cross join (select 1 as n union all select 2 union all select 3) a
The original answer below, using a CTE and union all, requires the CTE to be evaluated 3 times.
I have the following output with me from multiple tables
So set that query up as a Common Table Expression
;WITH CTE AS (
-- the query that produces that output
)
select id,b1,c from CTE
union all
select id,b2,e from CTE
union all
select id,b3,g from CTE
NOTE - Contrary to popular belief, your CTE while conveniently written once, is run thrice in the above query, once for each of the union all parts.
NOTE ALSO that if you actually name 3 columns "b" (literally), there is no way to identify which b you are referring to in anything that tries to reference the results - in fact SQL Server will not let you use the query in a CTE or subquery.
The following example shows how to perform the above, and (if you show the execution plan) reveals that the CTE is run 3 times. The lines between ---- BELOW HERE and ---- ABOVE HERE are a mock of the original query that produces the output in the question.
if object_id('tempdb..#eav') is not null drop table #eav
;
create table #eav (id char(3), b int, v int)
insert #eav select 'abc', 2, 123
insert #eav select 'abc', 3, 321
insert #eav select 'abc', 7, 876
insert #eav select 'abd', 2, 456
insert #eav select 'abd', 3, 452
insert #eav select 'abd', 7, 234
insert #eav select 'abe', 2, 0
insert #eav select 'abe', 3, 123
insert #eav select 'abe', 7, 121
insert #eav select 'abf', 3, 535
insert #eav select 'abf', 7, 1212
;with cte as (
---- BELOW HERE
select id.id, b1, b1.v c, b2, b2.v e, b3, b3.v g
from
(select distinct id, 2 as b1, 3 as b2, 7 as b3 from #eav) id
left join #eav b1 on b1.b=id.b1 and b1.id=id.id
left join #eav b2 on b2.b=id.b2 and b2.id=id.id
left join #eav b3 on b3.b=id.b3 and b3.id=id.id
---- ABOVE HERE
)
select b1, c from cte
union all
select b2, e from cte
union all
select b3, g from cte
order by b1
You would be better off storing the data into a temp table before doing the union all select.
Instead of this which does not work as you know
select b.a, b.c
union all
select b.d,b.e from (select a,c,d,e from <set of join>) b
You can do this. Union with repeated sub-select
select b.a, b.c from (select a,c,d,e from <set of join>) b
union all
select b.d, b.e from (select a,c,d,e from <set of join>) b
Or this. Repeated use of cte.
with cte as
(select a,c,d,e from <set of join>)
select b.a, b.c from cte b
union all
select b.d, b.e from cte b
Or use a temporary table variable.
declare @T table (a int, c int, d int, e int)
insert into @T
select a,c,d,e from <set of join>
select b.a, b.c from @T b
union all
select b.d, b.e from @T b
This code is not tested so there might be any number of typos in there.
I'm not sure if I understood your problem correctly, but I have been using something like this for some time:
let's say we have a table
ID Val1 Val2
1 A B
2 C D
to obtain a result like
ID Val
1 A
1 B
2 C
2 D
You can use a query:
select ID, case when i=1 then Val1 when i=2 then Val2 end as Val
from table
left join ( select 1 as i union all select 2 as i ) table_i on i=i
which simply joins the table with a subquery containing two values, creating a Cartesian product. In effect, all rows are doubled (or multiplied by however many values the subquery has). You can vary the number of values depending on how many versions of each row you need. Depending on the value of i, Val will be Val1 or Val2 from the original table. If you look at the execution plan, there will be a warning that the join has no join predicate (because of i=i), but that is fine - we want it.
This makes queries a bit large (in terms of text) because of all the case whens, but they are quite easy to read if formatted properly. I needed it for awkward tables like "BigID, smallID1, smallID2...smallID11" that were spread across many columns, I don't know why.
Hope it helps.
Oh, I use a static table with 10,000 numbers, so I just use
join tab10k on i<=10
for 10x rows.
I apologize for the formatting, I'm new here.
I need to select data from two table using a join. This is fairly simple and have no problems here. The problem occurs when the field I am joining is used as two separate foreign keys (I didn't design this). So the ID field that I join on is either a positive or negative number.
If it's a positive number it relates to ID_1 on the table_2 table; if it's negative, the number relates to ID_2 on the table_2 table. However, ID_2 itself will be a positive number (even though it's stored as a negative in the foreign key). Obviously there are no constraints to enforce these - so in essence they are not real foreign keys :/
The SQL I'm using goes something like this and is fine for the positive numbers:
select t1.Stuff, t2.MoreStuff from table_1 t1
join table_2 t2 on t1.ID_1 = t2.ID_1
where ...
How can I incorporate the negative aspect into the join? Is this even possible? Ideally I'd like to alter the table to my needs, but apparently this is not a valid option. I'm well and truly stuck.
The only other idea I've had is a separate SQL statement to handle these odd ones. This is all being run by CLR SQL from C#. Adding a separate SqlCommand to the code will most likely slow things down, hence why I'd prefer to keep it all in one command.
Your input is welcome, thanks :)
Let's say the tables look like this:
Table1 (id INT, foo INT, fk INT)
Table2 (id1 INT, id2 INT, bar VARCHAR(100))
...where fk can be used to look up a row in Table2 using id1 if positive and id2 if negative.
Then you can do the join as follows:
SELECT T1.id, T1.foo, T2.bar
FROM Table1 T1 INNER JOIN Table2 T2
ON (T1.fk > 0 AND T2.id1 = T1.fk)
OR (T1.fk < 0 AND T2.id2 = - T1.fk)
Simplest way - join these tables using UNION ALL:
select t1.Stuff, t2.MoreStuff from table_1 t1
join table_2 t2 on t1.ID_1 = t2.ID_1
where t1.ID_1 > 0
UNION ALL
select t1.Stuff, t2.MoreStuff from table_1 t1
join table_2 t2 on abs(t1.ID_1) = t2.ID_2
where t1.ID_1 < 0
This won't be very performant... but then, nothing will be. You need to transform your negative key into a positive one, and use conditional logic for the join. Like this:
select t1.Stuff, t2.MoreStuff
from table_1 t1
join table_2 t2 on (t1.ID_1 > 0 AND t1.ID_1 = t2.ID_1)
OR (t1.ID_1 <0 AND ABS(t1.ID_1) = t2.ID_2)
where ...
No chance of using an index, because you're transforming t1.ID_1 (with the ABS function), but it's the best that you can do given the circumstances.
You can do something like this, but only after introducing the schema designer to a LART:
SELECT
t1.stuff, COALESCE(t2a.morestuff, t2b.morestuff)
FROM
table_1 t1
LEFT JOIN table_2 t2a ON (t1.id_1 > 0 AND t1.id_1 = t2a.id_1)
LEFT JOIN table_2 t2b ON (t1.id_1 < 0 AND t1.id_1 = -1 * t2b.id_2)
-- etc
Alternatively,
SELECT
t1.stuff, t2.morestuff
FROM
table_1 t1
LEFT JOIN table_2 t2 ON (
(t1.id_1 > 0 AND t1.id_1 = t2.id_1)
OR (t1.id_1 < 0 AND t1.id_1 = -1 * t2.id_2)
)
-- etc
Remember the LART, that's the most important part!
try this
DECLARE @Table TABLE(
ID INT,
ForeignKeyID INT
)
INSERT INTO @Table (ID,ForeignKeyID) SELECT 1, 1
INSERT INTO @Table (ID,ForeignKeyID) SELECT 2, 2
INSERT INTO @Table (ID,ForeignKeyID) SELECT 3, -1
INSERT INTO @Table (ID,ForeignKeyID) SELECT 4, -2
DECLARE @ForeignTable TABLE(
ID_1 INT,
ID_2 INT,
Val VARCHAR(MAX)
)
INSERT INTO @ForeignTable (ID_1,ID_2,Val) SELECT 1, 11, '1'
INSERT INTO @ForeignTable (ID_1,ID_2,Val) SELECT 2, 22, '2'
INSERT INTO @ForeignTable (ID_1,ID_2,Val) SELECT 3, 1, '3'
INSERT INTO @ForeignTable (ID_1,ID_2,Val) SELECT 3, 2, '4'
SELECT *
FROM @Table t INNER JOIN
@ForeignTable ft ON ABS(t.ForeignKeyID) =
CASE
WHEN t.ForeignKeyID > 0
THEN ft.ID_1
ELSE
ft.ID_2
END
It will have to be something like
select t1.Stuff, t2.MoreStuff from table_1 t1, table_2 t2 where (t1.ID_1 = t2.ID_1 OR t1.ID_1 = CONCAT('-', t2.ID_2)) and ...
Not sure if I have misunderstood your question.
By applying left joins across table two and using the absolute value function, you should be able to accomplish what you're looking for:
SELECT t1.Stuff, isnull(t2.MoreStuff, t2_2.MoreStuff)
FROM table_1 t1
LEFT JOIN table_2 t2 ON t1.ID_1 = t2.ID_1
AND t1.ID_1 > 0
LEFT JOIN table_2 t2_2 ON abs(t1.ID_1) = t2_2.ID_2
AND t1.ID_1 < 0
WHERE
...
The caveat here is that if ID_1 and ID_2 are not mutually exclusive you will get 2 query results.
select t1.Stuff, t2.MoreStuff from table_1 t1
join table_2 t2 on t1.ID_1 = t2.ID_1 or -t1.ID_1 = t2.ID_2
where ...