src/routes/blog/post/announcing-bigint-columns/+page.markdoc: 132 additions & 7 deletions
@@ -11,11 +11,11 @@ featured: false
 callToAction: true
 ---
 
-A 32-bit signed integer caps out at around 2.1 billion. The moment your column needs to hold a larger value, the existing integer type stops being enough.
+A 32-bit signed integer caps out at roughly **-2.1 billion** on the low end and **+2.1 billion** on the high end. The moment your column needs to hold a value outside that range, in either direction, the existing integer type stops being enough.
 
 To close this gap, we’re introducing **BigInt columns** in Appwrite Databases.
 
-BigInt is a new column type that stores 64-bit signed integers natively, with the same `min`, `max`, `default`, and atomic increment/decrement support you already use with regular integer columns. If your data fits in a 32-bit integer, you don’t need to change anything. If it doesn’t, BigInt is the column type you reach for.
+BigInt is a new column type that stores 64-bit signed integers natively, with the same `min`, `max`, and `default` parameters you already use with regular integer columns. Previously, the only way to store 64-bit values was to set `size: 8` on an integer column, which was easy to miss and felt out of place on a type called integer. BigInt makes the intent explicit: if your data fits in a 32-bit integer, you don’t need to change anything; if it doesn’t, BigInt is the column type you reach for.
 
 # What BigInt gives you
 
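The 32-bit and 64-bit bounds quoted in the post are easy to verify; a minimal Node.js sketch (illustrative, not part of the post), which also shows why values near the top of the 64-bit range need JavaScript `BigInt` literals rather than plain numbers:

```javascript
// 32-bit signed integer bounds: -2^31 .. 2^31 - 1
const INT32_MIN = -(2 ** 31);     // -2147483648
const INT32_MAX = 2 ** 31 - 1;    //  2147483647

// 64-bit signed bounds exceed Number.MAX_SAFE_INTEGER (2^53 - 1),
// so they can only be represented exactly with BigInt literals.
const INT64_MIN = -(2n ** 63n);   // -9223372036854775808n
const INT64_MAX = 2n ** 63n - 1n; //  9223372036854775807n

console.log(INT32_MAX);           // 2147483647
console.log(INT64_MAX.toString()); // "9223372036854775807"
```

This is also why client code reading 64-bit values should avoid round-tripping them through plain `Number`, which silently loses precision above 2^53 - 1.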
@@ -26,11 +26,136 @@ A BigInt column accepts any value in the 64-bit signed integer range, from **-9,
 - **High-resolution timestamps**: nanosecond or microsecond epochs, monotonic clocks, sequence numbers from event logs.
 - **Financial values in minor units**: balances, totals, or ledger entries stored as integers in the smallest currency unit, where you want to avoid floating-point rounding entirely.
 
-# Atomic operations work out of the box
+# Works with Operators
 
-If you’ve been using [Atomic numeric operations](/docs/products/databases/atomic-numeric-operations) on integer or float columns, BigInt slots in with no changes. The `incrementRowColumn` and `decrementRowColumn` methods accept BigInt columns transparently, so you can safely increment a 64-bit counter under concurrency without ever fetching or rewriting the row.
+BigInt columns plug straight into [Operators](/docs/products/databases/operators), so you can update a 64-bit value directly on the server without fetching the row first. `Operator.increment`, `Operator.decrement`, and the rest of the numeric operators treat BigInt the same way they treat integer and float columns: the change is applied atomically under concurrency control, respects any `min` or `max` bounds you set, and returns the new value.
 
-This means a counter like `total_views` can sit at 12 billion, get hammered with concurrent `+1` requests across regions, and stay consistent. The server applies each delta atomically and respects whatever `min` or `max` bounds you set on the column, exactly as it does for 32-bit integers.
+This means a counter like `total_views` can sit at 12 billion, get hammered with concurrent `+1` requests across regions, and stay consistent without any client-side coordination.
+
+{% multicode %}
+
+```server-nodejs
+const result = await tablesDB.updateRow({
+    databaseId: '<DATABASE_ID>',
+    tableId: '<TABLE_ID>',
+    rowId: '<ROW_ID>',
+    data: {
+        total_views: sdk.Operator.increment(1)
+    }
+});
+```
+
+```server-python
+result = tables_db.update_row(
+    database_id = '<DATABASE_ID>',
+    table_id = '<TABLE_ID>',
+    row_id = '<ROW_ID>',
+    data = { 'total_views': Operator.increment(1) }
+)
+```
+
+```server-php
+$result = $tablesDB->updateRow(
+    databaseId: '<DATABASE_ID>',
+    tableId: '<TABLE_ID>',
+    rowId: '<ROW_ID>',
+    data: [ 'total_views' => Operator::increment(1) ]
+);
+```
+
+```server-ruby
+result = tables_db.update_row(
+    database_id: '<DATABASE_ID>',
+    table_id: '<TABLE_ID>',
+    row_id: '<ROW_ID>',
+    data: { 'total_views' => Operator.increment(1) }
+)
+```
+
+```server-dart
+await tablesDB.updateRow(
+    databaseId: '<DATABASE_ID>',
+    tableId: '<TABLE_ID>',
+    rowId: '<ROW_ID>',
+    data: {
+        'total_views': Operator.increment(1)
+    },
+);
+```
+
+```server-kotlin
+val response = tablesDB.updateRow(
+    databaseId = "<DATABASE_ID>",
+    tableId = "<TABLE_ID>",
+    rowId = "<ROW_ID>",
+    data = mapOf("total_views" to Operator.increment(1))
+)
+```
+
+```server-java
+tablesDB.updateRow(
+    "<DATABASE_ID>",
+    "<TABLE_ID>",
+    "<ROW_ID>",
+    Map.of("total_views", Operator.increment(1)),
+    new CoroutineCallback<>((result, error) -> {
+        if (error != null) {
+            error.printStackTrace();
+            return;
+        }
+        System.out.println(result);
+    })
+);
+```
+
+```server-swift
+_ = try await tablesDB.updateRow(
+    databaseId: "<DATABASE_ID>",
+    tableId: "<TABLE_ID>",
+    rowId: "<ROW_ID>",
+    data: [
+        "total_views": Operator.increment(1)
+    ]
+)
+```
+
+```server-dotnet
+await tablesDB.UpdateRow(
+    databaseId: "<DATABASE_ID>",
+    tableId: "<TABLE_ID>",
+    rowId: "<ROW_ID>",
+    data: new Dictionary<string, object>
+    {
+        { "total_views", Operator.Increment(1) }
+    }
+);
+```
+
+```server-go
+_, err := tablesDB.UpdateRow(
+    "<DATABASE_ID>",
+    "<TABLE_ID>",
+    "<ROW_ID>",
+    tablesDB.WithUpdateRowData(map[string]any{
+        "total_views": operator.Increment(1),
+    }),
+)
+```
+
+```server-rust
+tables_db.update_row(
+    "<DATABASE_ID>",
+    "<TABLE_ID>",
+    "<ROW_ID>",
+    Some(json!({
+        "total_views": operator::increment_by(1)
+    })),
+    None,
+    None,
+).await?;
+```
+
+{% /multicode %}
 
 # Creating a BigInt column
 
@@ -339,10 +464,10 @@ There is no automatic migration between integer and BigInt, so picking the right
 
 BigInt columns are available on **Appwrite Cloud** today.
 
-You can create your first BigInt column directly from the Console, or roll it out programmatically through any of the Server SDKs above. The new column type works with every TablesDB feature you already use, including atomic numeric operations, indexes, and full schema creation.
+You can create your first BigInt column directly from the Console, or roll it out programmatically through any of the Server SDKs above. The new column type works with every TablesDB feature you already use, including Operators, indexes, and full schema creation.
 
 # More resources
 
 - [Read the Databases documentation](/docs/products/databases/tables#columns)
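The "financial values in minor units" use case above comes down to floating-point rounding: integer cents stay exact where fractional dollars do not, and a busy ledger outgrows 32 bits quickly. A minimal Node.js sketch of both points (illustrative names, not part of the post):

```javascript
// Floating-point dollars accumulate rounding error...
const floatTotal = 0.1 + 0.2;            // 0.30000000000000004, not 0.3
console.log(floatTotal === 0.3);         // false

// ...while integer minor units (cents, as BigInt) stay exact.
const cents = 10n + 20n;
console.log(cents === 30n);              // true

// A lifetime revenue counter in cents passes the 32-bit cap fast:
// $30,000,000 is 3,000,000,000 cents, already past 2,147,483,647.
const bigLedger = 3_000_000_000n;
console.log(bigLedger > 2_147_483_647n); // true
```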
 Appwrite Databases now supports **BigInt** columns, giving you a 64-bit signed integer type alongside the existing 32-bit `integer`. Use BigInt when your values may exceed the ±2.1 billion range of a regular integer, for example large counters, high-resolution timestamps, or external IDs from systems that use 64-bit keys.
 
-BigInt columns accept optional `min`, `max`, and `default` parameters and work with atomic increment/decrement operations, just like regular integers.
+BigInt columns accept optional `min`, `max`, and `default` parameters and work with [Operators](/docs/products/databases/operators) for atomic server-side updates, just like regular integers.