How can I count duplicate rows according to specific columns?

Martin AJ

I have a table like this:

// financial_supporter
+----+---------+--------+
| id | user_id | amount |
+----+---------+--------+
| 1  | 342     | 1000   |
| 2  | 234     | 6500   |
| 3  | 675     | 500    |
| 4  | 342     | 500    |
| 5  | 89      | 800    |
| 6  | 234     | 1500   |
| 7  | 342     | 1200   | 
+----+---------+--------+

I need to select all columns of the table above, plus one extra column named "for_the_n_time" that contains how many times the user has supported us.

So the expected result is:

// financial_supporter
+----+---------+--------+----------------+
| id | user_id | amount | for_the_n_time |
+----+---------+--------+----------------+
| 1  | 342     | 1000   | 3              | -- for the third time
| 2  | 234     | 6500   | 2              | -- for the second time
| 3  | 675     | 500    | 1              | -- for the first time
| 4  | 342     | 500    | 2              | -- for the second time
| 5  | 89      | 800    | 1              | -- for the first time
| 6  | 234     | 1500   | 1              | -- for the first time
| 7  | 342     | 1200   | 1              | -- for the first time
+----+---------+--------+----------------+

Here is my query, which is incomplete. I guess I need a self-join, but I cannot fully implement it:

SELECT fs.*, <I don't know> as for_the_n_time
FROM financial_supporter fs
INNER JOIN financial_supporter as fs2 ON <I don't know>
WHERE 1
ORDER BY id DESC

Any idea how I can do that?


Edit: Also, how can I order the result by id DESC, like this:

// financial_supporter
+----+---------+--------+----------------+
| id | user_id | amount | for_the_n_time |
+----+---------+--------+----------------+
| 7  | 342     | 1200   | 3              | -- for the third time
| 6  | 234     | 1500   | 2              | -- for the second time
| 5  | 89      | 800    | 1              | -- for the first time
| 4  | 342     | 500    | 2              | -- for the second time
| 3  | 675     | 500    | 1              | -- for the first time
| 2  | 234     | 6500   | 1              | -- for the first time
| 1  | 342     | 1000   | 1              | -- for the first time
+----+---------+--------+----------------+
Tim Biegeleisen

You may compute your generated column using a correlated subquery. I assume that the date of each record correlates with the id column, i.e. earlier contributions have a lower id than later contributions.

SELECT *,
    (SELECT COUNT(*) FROM financial_supporter fs2
     WHERE fs1.user_id = fs2.user_id AND fs2.id <= fs1.id) for_the_n_time
FROM financial_supporter fs1
ORDER BY id DESC;
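As a sanity check (not part of the answer itself), the correlated-subquery approach can be reproduced against the question's sample data using Python's built-in sqlite3 module with an in-memory database:

```python
import sqlite3

# Build the sample table from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE financial_supporter (id INTEGER, user_id INTEGER, amount INTEGER)"
)
conn.executemany(
    "INSERT INTO financial_supporter VALUES (?, ?, ?)",
    [(1, 342, 1000), (2, 234, 6500), (3, 675, 500),
     (4, 342, 500), (5, 89, 800), (6, 234, 1500), (7, 342, 1200)],
)

# Correlated subquery: for each row, count that user's rows
# whose id is at or below the current row's id.
query = """
SELECT *,
    (SELECT COUNT(*) FROM financial_supporter fs2
     WHERE fs1.user_id = fs2.user_id AND fs2.id <= fs1.id) AS for_the_n_time
FROM financial_supporter fs1
ORDER BY id DESC
"""
for row in conn.execute(query):
    print(row)
```

The output matches the DESC table from the question's edit, e.g. `(7, 342, 1200, 3)` for user 342's third contribution down to `(1, 342, 1000, 1)` for their first.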


Demo
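An alternative not mentioned in the answer: on databases with window functions (MySQL 8.0+, SQLite 3.25+, PostgreSQL, etc.), `ROW_NUMBER()` partitioned by `user_id` produces the same per-user counter without a correlated subquery. A sketch, again using sqlite3 for a runnable demonstration:

```python
import sqlite3

# Same sample data as the question, in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE financial_supporter (id INTEGER, user_id INTEGER, amount INTEGER)"
)
conn.executemany(
    "INSERT INTO financial_supporter VALUES (?, ?, ?)",
    [(1, 342, 1000), (2, 234, 6500), (3, 675, 500),
     (4, 342, 500), (5, 89, 800), (6, 234, 1500), (7, 342, 1200)],
)

# ROW_NUMBER() restarts at 1 for each user_id and increases with id,
# which is exactly the "for_the_n_time" counter.
query = """
SELECT id, user_id, amount,
       ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY id) AS for_the_n_time
FROM financial_supporter
ORDER BY id DESC
"""
for row in conn.execute(query):
    print(row)
```

On large tables this typically outperforms the correlated subquery, since the table is scanned once instead of once per row.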

