Hi everyone,
What typical/maximum database I/O throughputs are people getting for simple
selects over datasets that are cached in memory but not explicitly clustered?
100KB/sec? 1MB/sec? 10MB/sec? (Assuming rows much smaller than 8KB.)
Does a "select count(*) from tablename where ..." actually retrieve all
columns from the selected rows, or does it only count the rows?
That is to say: would/should "select count(*)" be slower than
"select count(averysmallcolumnmaybeboolean)"?
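(In case it's useful to anyone trying the same comparison: one way to check this empirically, assuming PostgreSQL and with hypothetical table/column names, is to put the two forms side by side under EXPLAIN ANALYZE and compare the reported execution times. Note the two are not semantically identical: count(somecolumn) skips NULLs, while count(*) counts every row.)

```sql
-- Hypothetical table/column names; compare actual run times of the two forms.
EXPLAIN ANALYZE SELECT count(*) FROM tablename WHERE somecondition;
EXPLAIN ANALYZE SELECT count(someboolcolumn) FROM tablename WHERE somecondition;
```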
I'll test things out in practice, but it would be good to know what to
expect in theory.
Thanks!
Link.