This summarises the use of my shiny apps (at https://shiny.psyctc.org/). The analyses will evolve a bit through 2024 as, I hope, the level of use increases.

Current data

| Info | Value |
|---|---|
| First date in data | 2024-02-07 |
| Last date in data | 2025-01-21 |
| This analysis time/date | 03:13 on 21/01/2025 |
| Number of days spanned | 349 |
| Total number of sessions | 3313 |
| Mean sessions per day | 9.49 |

I am not using any way to separate different users, and sessions are logged per app, so if someone used multiple apps during one visit to the server, each app used counts as a separate session.

App uses per day

Here’s the plot of uses per day.

That shows one large burst of use after the apps were publicised through the Systemic Research Centre Email list (5.iii.24) and a smaller one after a posting to the IDANET list (9.iii.24). There are later bursts that I can’t directly ascribe to any publicity.

Sessions per week

More sensibly, here is the plot by week, actually plotting the sessions per day and counting from the launch on 7.ii.24. Where the last week is still incomplete, that has been taken into account in the calculations. 95% CIs are Poisson model estimates.
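Purely as a minimal sketch, and not necessarily how the plot itself was produced, a weekly sessions-per-day rate with a Poisson 95% CI can be obtained in R with poisson.test(); the weekly counts below are made up for illustration.

```r
library(tibble)
library(dplyr)
library(purrr)

## hypothetical weekly counts: nSessions sessions seen across nDays days of data
tibWeeks <- tibble(week = as.Date(c("2024-02-05", "2024-02-12")),
                   nSessions = c(35, 62),
                   nDays = c(5, 7)) # first week incomplete

tibWeeks %>%
  mutate(rate = nSessions / nDays,
         ## poisson.test() gives a 95% CI for the rate per day when T = nDays
         CI = map2(nSessions, nDays, ~ poisson.test(.x, T = .y)$conf.int),
         LCL = map_dbl(CI, 1),
         UCL = map_dbl(CI, 2)) %>%
  select(-CI)
```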

Breaking that down by app gives me this.

And facetting by app gives this.

Sessions per month

The first month was incomplete and the last month will usually be incomplete; that is taken into consideration in computing these sessions per day rates.
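As a sketch of that pro-rating, assuming a data frame tibSessions with one row per session and a Date column date (names hypothetical), the per-day rate for each month can be computed against the number of days of the month actually covered by the data:

```r
library(dplyr)
library(lubridate)

firstDate <- min(tibSessions$date)
lastDate <- max(tibSessions$date)

tibSessions %>%
  mutate(month = floor_date(date, "month")) %>%
  group_by(month) %>%
  summarise(nSessions = n(), .groups = "drop") %>%
  mutate(monthEnd = month + months(1) - days(1),
         ## days of each calendar month that fall within the data window
         daysCovered = as.numeric(pmin(monthEnd, lastDate) - pmax(month, firstDate)) + 1,
         sessionsPerDay = nSessions / daysCovered)
```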

Numbers of sessions per app

Here’s the number of times each app has been used during that period.

| App | Sessions | First used | Days available | Sessions per day | Days used | % days used |
|---|---|---|---|---|---|---|
| RCI1 | 1,364 | 2024-02-07 | 349 | 3.908 | 260 | 74% |
| CSC1 | 543 | 2024-02-07 | 349 | 1.556 | 201 | 58% |
| COREpapers1 | 206 | 2024-05-11 | 255 | 0.808 | 92 | 36% |
| Cronbach1Feldt | 150 | 2024-02-07 | 349 | 0.430 | 96 | 28% |
| RCI2 | 149 | 2024-02-07 | 349 | 0.427 | 76 | 22% |
| CORE-OM_scoring | 120 | 2024-04-16 | 280 | 0.429 | 73 | 26% |
| Gaussian1 | 100 | 2024-03-05 | 322 | 0.311 | 73 | 23% |
| CIcorrelation | 99 | 2024-02-07 | 349 | 0.284 | 59 | 17% |
| ECDFplot | 83 | 2024-02-07 | 349 | 0.238 | 31 | 9% |
| CSClookup2a | 52 | 2024-02-07 | 349 | 0.149 | 29 | 8% |
| CImean | 50 | 2024-02-07 | 349 | 0.143 | 41 | 12% |
| CIproportion | 43 | 2024-02-07 | 349 | 0.123 | 35 | 10% |
| Spearman-Brown | 42 | 2024-05-03 | 263 | 0.160 | 26 | 10% |
| Histogram_and_summary1 | 40 | 2024-03-25 | 302 | 0.132 | 21 | 7% |
| plotCIPearson | 32 | 2024-02-07 | 349 | 0.092 | 25 | 7% |
| CISpearman | 29 | 2024-02-07 | 349 | 0.083 | 25 | 7% |
| CIdiff2proportions | 24 | 2024-02-07 | 349 | 0.069 | 15 | 4% |
| Create_univariate_data | 23 | 2024-04-09 | 287 | 0.080 | 22 | 8% |
| Bonferroni1 | 21 | 2024-03-24 | 303 | 0.069 | 16 | 5% |
| g_from_d_and_n | 21 | 2024-02-07 | 349 | 0.060 | 19 | 5% |
| CISD | 20 | 2024-02-07 | 349 | 0.057 | 19 | 5% |
| Screening1 | 20 | 2024-02-07 | 349 | 0.057 | 15 | 4% |
| Attenuation2 | 19 | 2024-10-11 | 102 | 0.186 | 16 | 16% |
| Attenuation | 17 | 2024-10-09 | 104 | 0.163 | 13 | 12% |
| Feldt2 | 17 | 2024-11-27 | 55 | 0.309 | 14 | 25% |
| random1 | 16 | 2024-11-19 | 63 | 0.254 | 14 | 22% |
| getCorrectedR | 13 | 2024-10-13 | 100 | 0.130 | 9 | 9% |

The columns of Sessions per day and of % days used are rather misleading as different apps have been available for very different numbers of days. I won’t be able to get a less misleading forest plot of the mean usage per day per app until there has been far more usage than we have had so far, so I may add that later in the year.

However, I can get confidence intervals for proportions on the usage we already have, so here’s a less misleading forest plot of the proportion of the available days on which each app was used. The dotted reference line marks the overall usage as a proportion of days available across all the apps.
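As a sketch of the kind of computation behind that plot, here are three rows taken from the table above with binom.test() used for the 95% CIs (the actual plot may use a different interval method):

```r
library(tibble)
library(dplyr)
library(purrr)
library(ggplot2)

tibApps <- tribble(~app,      ~daysUsed, ~daysAvailable,
                   "RCI1",          260,            349,
                   "CSC1",          201,            349,
                   "Feldt2",         14,             55)

tibApps %>%
  mutate(prop = daysUsed / daysAvailable,
         CI = map2(daysUsed, daysAvailable, ~ binom.test(.x, .y)$conf.int),
         LCL = map_dbl(CI, 1),
         UCL = map_dbl(CI, 2)) %>%
  ggplot(aes(y = reorder(app, prop), x = prop, xmin = LCL, xmax = UCL)) +
  geom_pointrange() +
  ## dotted reference line: overall usage across the apps shown
  geom_vline(xintercept = sum(tibApps$daysUsed) / sum(tibApps$daysAvailable),
             linetype = "dotted") +
  labs(x = "Proportion of available days on which the app was used", y = NULL)
```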

Here’s a map of usage per app against dates. The sizes of the points show how many times the app was used on that day. The y axis sorts by first date used and then by descending total number of times used.
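A sketch of how such a map can be built in ggplot2, assuming a data frame tibUse with one row per app per day and the count of uses in n (all names hypothetical):

```r
library(dplyr)
library(ggplot2)

tibUse %>%
  group_by(app) %>%
  mutate(firstUsed = min(date),
         totalUses = sum(n)) %>%
  ungroup() %>%
  ## order the y axis by first date used, then by descending total uses
  arrange(firstUsed, desc(totalUses)) %>%
  mutate(app = factor(app, levels = unique(app))) %>%
  ggplot(aes(x = date, y = app, size = n)) +
  geom_point()
```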

That shows that many of the apps were first used on the same day (7.ii.2024), which was the day I set up this logging: I tested all the then-existing apps that day so they all appear on that date.

Breakdown by day of the week

| Weekday | n | percent |
|---|---|---|
| Mon | 5,024 | 18% |
| Tue | 5,258 | 19% |
| Wed | 5,017 | 18% |
| Thu | 4,148 | 15% |
| Fri | 4,078 | 14% |
| Sat | 1,941 | 7% |
| Sun | 2,771 | 10% |

Same sorted!

| Weekday | n | percent |
|---|---|---|
| Tue | 5,258 | 19% |
| Mon | 5,024 | 18% |
| Wed | 5,017 | 18% |
| Thu | 4,148 | 15% |
| Fri | 4,078 | 14% |
| Sun | 2,771 | 10% |
| Sat | 1,941 | 7% |

Time of day

I’ve broken this down by hour. The server is to some extent protected behind a proxy at my ISP, which is good for forcing https access, but it does mean that I don’t know where people come from, so these times are all UTC (i.e. the old “GMT”: internet time). I think it also suggests, assuming that most accesses are during working hours, that most visitors/users are coming to the site from Europe or the Americas.
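A minimal sketch of that breakdown, assuming a data frame tibSessions with a POSIXct column connected holding the session start times (names hypothetical):

```r
library(dplyr)
library(lubridate)

tibSessions %>%
  mutate(hour = hour(with_tz(connected, tzone = "UTC"))) %>%  # hour of day in UTC
  count(hour) %>%
  mutate(percent = round(100 * n / sum(n)))
```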

| Hour | n | percent |
|---|---|---|
| 0 | 27 | 1% |
| 1 | 27 | 1% |
| 2 | 25 | 1% |
| 3 | 27 | 1% |
| 4 | 95 | 3% |
| 5 | 168 | 5% |
| 6 | 218 | 7% |
| 7 | 172 | 5% |
| 8 | 259 | 8% |
| 9 | 212 | 6% |
| 10 | 175 | 5% |
| 11 | 170 | 5% |
| 12 | 209 | 6% |
| 13 | 192 | 6% |
| 14 | 247 | 7% |
| 15 | 183 | 6% |
| 16 | 145 | 4% |
| 17 | 175 | 5% |
| 18 | 122 | 4% |
| 19 | 104 | 3% |
| 20 | 139 | 4% |
| 21 | 116 | 4% |
| 22 | 63 | 2% |
| 23 | 43 | 1% |

Same sorted.

| Hour | n | percent |
|---|---|---|
| 8 | 259 | 8% |
| 14 | 247 | 7% |
| 6 | 218 | 7% |
| 9 | 212 | 6% |
| 12 | 209 | 6% |
| 13 | 192 | 6% |
| 15 | 183 | 6% |
| 10 | 175 | 5% |
| 17 | 175 | 5% |
| 7 | 172 | 5% |
| 11 | 170 | 5% |
| 5 | 168 | 5% |
| 16 | 145 | 4% |
| 20 | 139 | 4% |
| 18 | 122 | 4% |
| 21 | 116 | 4% |
| 19 | 104 | 3% |
| 4 | 95 | 3% |
| 22 | 63 | 2% |
| 23 | 43 | 1% |
| 0 | 27 | 1% |
| 1 | 27 | 1% |
| 3 | 27 | 1% |
| 2 | 25 | 1% |

Browsers

For what little it’s worth, here are the browser IDs picked up by shiny (in descending order of frequency).

The value of “ahrefs.com/robot/” is my translation of accesses that identify their browser as: “Netscape 5.0 (compatible; AhrefsBot/7.0; +http://ahrefs.com/robot/) -?”.
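That recoding is essentially a string match; a sketch of the idea, assuming a data frame tibBrowsers with the raw browser IDs in a column useragent (names and patterns illustrative):

```r
library(dplyr)
library(stringr)

tibBrowsers %>%
  mutate(Browser2 = case_when(
    str_detect(useragent, fixed("ahrefs.com/robot")) ~ "http://ahrefs.com/robot/",
    str_detect(useragent, fixed("developers.facebook.com")) ~
      "https://developers.facebook.com/docs/sharing/webmasters/crawler",
    TRUE ~ useragent))
```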

For reasons I don’t understand, my open-source Shiny server does not seem to detect Microsoft Edge. I have used the apps with Edge (ugh) and it didn’t show up here. If you know why, or even how to detect Edge, do tell me (https://www.psyctc.org/psyctc/contact-me/)!

| Browser | n |
|---|---|
| Chrome | 2,115 |
| Firefox | 925 |
| Safari | 231 |
| Other | 22 |
| Opera | 6 |

The “Other” there refers to visits from browsers not identifying as one of Chrome, Firefox, Opera or Safari. These are usually or always crawlers; the breakdown of them was as follows.

| Browser2 | n |
|---|---|
| http://ahrefs.com/robot/ | 19 |
| https://developers.facebook.com/docs/sharing/webmasters/crawler | 3 |

I am a little bit interested in when these crawlers come and go so …

| Browser2 | firstSeen | lastSeen |
|---|---|---|
| http://ahrefs.com/robot/ | 2024-11-30 | 2025-01-16 |
| https://developers.facebook.com/docs/sharing/webmasters/crawler | 2025-01-08 | 2025-01-09 |

This shows the map against time; the size of the points shows the number per day.

For what it’s worth, here are the numbers per day.

| Other browser | date | nPerDay |
|---|---|---|
| http://ahrefs.com/robot/ | 2024-11-30 | 1 |
| | 2024-12-01 | 1 |
| | 2024-12-13 | 2 |
| | 2024-12-14 | 5 |
| | 2024-12-18 | 1 |
| | 2024-12-24 | 1 |
| | 2024-12-31 | 1 |
| | 2025-01-01 | 2 |
| | 2025-01-05 | 1 |
| | 2025-01-11 | 1 |
| | 2025-01-15 | 2 |
| | 2025-01-16 | 1 |
| https://developers.facebook.com/docs/sharing/webmasters/crawler | 2025-01-08 | 1 |
| | 2025-01-09 | 2 |

Browser versions

I can’t think it matters much, but here is the breakdown with the version numbers as well as the browser names.

| Browser | n |
|---|---|
| Chrome 131 | 383 |
| Chrome 130 | 312 |
| Chrome 129 | 266 |
| Chrome 128 | 194 |
| Chrome 125 | 161 |
| Chrome 126 | 145 |
| Firefox 125 | 132 |
| Chrome 127 | 125 |
| Firefox 131 | 120 |
| Safari 17 | 118 |
| Chrome 124 | 111 |
| Firefox 133 | 102 |
| Firefox 130 | 89 |
| Firefox 132 | 85 |
| Chrome 122 | 76 |
| Firefox 129 | 74 |
| Firefox 128 | 70 |
| Firefox 124 | 69 |
| Chrome 101 | 67 |
| Chrome 123 | 67 |
| Safari 18 | 47 |
| Firefox 123 | 44 |
| Firefox 126 | 44 |
| Chrome 86 | 38 |
| Safari 16 | 36 |
| Firefox 122 | 34 |
| Firefox 127 | 34 |
| Chrome 103 | 28 |
| Chrome 100 | 27 |
| Chrome 121 | 26 |
| Chrome 104 | 21 |
| Chrome 102 | 17 |
| Chrome 119 | 16 |
| Safari 604 | 16 |
| Firefox 134 | 11 |
| Safari 15 | 11 |
| Firefox 119 | 9 |
| Chrome 120 | 6 |
| Firefox 115 | 6 |
| Chrome 112 | 5 |
| Chrome 106 | 4 |
| Chrome 109 | 3 |
| Safari 14 | 3 |
| Chrome 107 | 2 |
| Chrome 114 | 2 |
| Chrome 116 | 2 |
| Chrome 117 | 2 |
| Chrome 132 | 2 |
| Chrome 79 | 2 |
| Firefox 102 | 2 |
| Opera 109 | 2 |
| Opera 113 | 2 |
| Chrome 110 | 1 |
| Chrome 111 | 1 |
| Chrome 115 | 1 |
| Chrome 4 | 1 |
| Chrome 94 | 1 |
| Opera 114 | 1 |
| Opera 115 | 1 |

Durations of sessions

A bit more interesting are the durations of the sessions.
Some sessions don’t have a recorded termination time: currently that’s true for 817, i.e. 24.7% of the sessions. This could include the occasional session still active at the time the copy of the database was pulled, but I think most will be ones where someone left the session open. I have capped the sessions at one hour in the analyses below.

Here are the descriptive statistics.

| name | nNA | nOK | min | lqrt | mean | uqrt | max |
|---|---|---|---|---|---|---|---|
| durMinsAll | 817 | 2,496 | 0.0 | 1.0 | 76.0 | 39.0 | 2,861.0 |
| durMinsCapped | 817 | 2,496 | 0.0 | 1.0 | 19.9 | 39.0 | 60.0 |
| durMinsCensored | 1,333 | 1,980 | 0.0 | 1.0 | 9.4 | 16.0 | 60.0 |

durMinsAll includes all the sessions so far; durMinsCapped treats all sessions recorded as lasting over 60 minutes as if they lasted exactly 60 minutes; more realistically, durMinsCensored drops those sessions on the assumption that they were abandoned. (durMinsCensored still shows a maximum duration of 60 minutes because session durations were measured to a fraction of a second, so any duration over 59’30" and less than 60’0" is rounded up to 60 minutes and counted as a genuine 60 minutes!)
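A minimal sketch of the capping and censoring, assuming a data frame tibSessions with the raw duration in minutes in durMinsAll:

```r
library(dplyr)

tibSessions %>%
  mutate(durMinsCapped = pmin(durMinsAll, 60),    # anything longer counts as 60 minutes
         durMinsCensored = if_else(durMinsAll > 60,
                                   NA_real_,      # anything longer treated as missing
                                   durMinsAll))
```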

Most of the sessions, as you would expect given the nature of the apps, last only a few minutes. If I use the censoring, i.e. ignore all the sessions that apparently lasted more than an hour on the plausible assumption that they were abandoned rather than someone continuing to try different parameters for over an hour, then there have been 1980 uncensored sessions so far. Of these, 40 lasted under a minute. I guess it’s possible to launch an app and get useful output in under a minute if only wanting the default parameters, but I think that would be rare, so I think we can regard these as “just looking” sessions; they represent 2% of the 1980 uncensored sessions.

The number of sessions lasting a minute (rounding to the nearest minute) was 912, i.e. 46.1% of the uncensored sessions. I think these probably represent very quick but perhaps genuine uses of an app.

That leaves 1028 sessions lasting longer than a minute but less than an hour, i.e. 51.9% of the uncensored sessions. I think these can be regarded as sessions in which someone entered parameters, perhaps played around with different parameters, and perhaps noted or pulled down outputs.

For now (August 2024) I see those as pretty sensible breakdown proportions. I guess that as time goes by it may be interesting to break things down by months and by apps but for now the numbers don’t really merit that and the effects of different apps being added at different times mean that the two variables of app and month are structurally entwined.

Values input

Where it might be useful to me to know more about the usage, I am logging input values for some apps. Here’s the breakdown of the numbers of sessions in which inputs were recorded.

| app_name | n | percent |
|---|---|---|
| COREpapers1 | 6,562 | 38.4% |
| RCI1 | 5,479 | 32.1% |
| CSC1 | 2,712 | 15.9% |
| RCI2 | 489 | 2.9% |
| CImean | 445 | 2.6% |
| CORE-OM_scoring | 225 | 1.3% |
| Cronbach1Feldt | 168 | 1.0% |
| CSClookup2a | 156 | 0.9% |
| random1 | 146 | 0.9% |
| ECDFplot | 139 | 0.8% |
| Spearman-Brown | 136 | 0.8% |
| Histogram_and_summary1 | 135 | 0.8% |
| Create_univariate_data | 101 | 0.6% |
| CIcorrelation | 72 | 0.4% |
| CISpearman | 43 | 0.3% |
| Gaussian1 | 27 | 0.2% |
| CIproportion | 18 | 0.1% |
| Feldt2 | 12 | 0.1% |
| Screening1 | 7 | 0.0% |
| CISD | 6 | 0.0% |
| plotCIPearson | 4 | 0.0% |
| g_from_d_and_n | 3 | 0.0% |
| Attenuation2 | 2 | 0.0% |
| CIdiff2proportions | 2 | 0.0% |

And here are the variables by app: nVisits is the total number of sessions with recorded inputs for that app, nVars is the number of variables that have been input for that app, and nVals is the number of distinct values that have been input for that variable.
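A sketch of how that summary could be computed, assuming a log tibInputs with one row per recorded input and columns session, app_name, id (the input variable) and value (names hypothetical):

```r
library(dplyr)

tibInputs %>%
  group_by(app_name) %>%
  mutate(nVisits = n_distinct(session),  # sessions with any recorded input for the app
         nVars = n_distinct(id)) %>%     # distinct input variables seen for the app
  group_by(app_name, id, nVisits, nVars) %>%
  summarise(nVals = n_distinct(value),   # distinct values entered for that variable
            .groups = "drop")
```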

| app_name | id | nVisits | nVars | nVals |
|---|---|---|---|---|
| COREpapers1 | authName | 6,562 | 58 | 102 |
| | clipbtn | 6,562 | 58 | 1 |
| | date1 | 6,562 | 58 | 45 |
| | date2 | 6,562 | 58 | 31 |
| | embedded | 6,562 | 58 | 17 |
| | filterAssStructure | 6,562 | 58 | 18 |
| | filterCORElanguages | 6,562 | 58 | 15 |
| | filterCOREmeasures | 6,562 | 58 | 26 |
| | filterFormats | 6,562 | 58 | 17 |
| | filterGenderCats | 6,562 | 58 | 9 |
| | mainPlotDownload-filename | 6,562 | 58 | 3 |
| | mainPlotDownload-format | 6,562 | 58 | 1 |
| | or | 6,562 | 58 | 6 |
| | or2 | 6,562 | 58 | 3 |
| | or3 | 6,562 | 58 | 4 |
| | or4 | 6,562 | 58 | 3 |
| | or5 | 6,562 | 58 | 4 |
| | otherMeasure | 6,562 | 58 | 30 |
| | otherMeasures_cell_clicked | 6,562 | 58 | 19 |
| | otherMeasures_cells_selected | 6,562 | 58 | 12 |
| | otherMeasures_columns_selected | 6,562 | 58 | 12 |
| | otherMeasures_row_last_clicked | 6,562 | 58 | 5 |
| | otherMeasures_rows_all | 6,562 | 58 | 61 |
| | otherMeasures_rows_current | 6,562 | 58 | 60 |
| | otherMeasures_rows_selected | 6,562 | 58 | 22 |
| | otherMeasures_search | 6,562 | 58 | 29 |
| | otherMeasures_state | 6,562 | 58 | 64 |
| | paperLang | 6,562 | 58 | 23 |
| | papers2_cell_clicked | 6,562 | 58 | 41 |
| | papers2_cells_selected | 6,562 | 58 | 16 |
| | papers2_columns_selected | 6,562 | 58 | 16 |
| | papers2_row_last_clicked | 6,562 | 58 | 8 |
| | papers2_rows_all | 6,562 | 58 | 96 |
| | papers2_rows_current | 6,562 | 58 | 96 |
| | papers2_rows_selected | 6,562 | 58 | 38 |
| | papers2_search | 6,562 | 58 | 45 |
| | papers2_state | 6,562 | 58 | 103 |
| | papers_cell_clicked | 6,562 | 58 | 237 |
| | papers_cells_selected | 6,562 | 58 | 198 |
| | papers_columns_selected | 6,562 | 58 | 198 |
| | papers_row_last_clicked | 6,562 | 58 | 30 |
| | papers_rows_all | 6,562 | 58 | 1,277 |
| | papers_rows_current | 6,562 | 58 | 1,305 |
| | papers_rows_selected | 6,562 | 58 | 282 |
| | papers_search | 6,562 | 58 | 249 |
| | papers_state | 6,562 | 58 | 1,326 |
| | reqEmpCOREdata | 6,562 | 58 | 37 |
| | reqOA | 6,562 | 58 | 12 |
| | reqOpenData | 6,562 | 58 | 13 |
| | reset_input | 6,562 | 58 | 9 |
| | shinyjs-resettable-side-panel | 6,562 | 58 | 7 |
| | tabSelected | 6,562 | 58 | 117 |
| | therOrGen | 6,562 | 58 | 49 |
| | vecAssStructure | 6,562 | 58 | 29 |
| | vecCORElanguages | 6,562 | 58 | 7 |
| | vecFormats | 6,562 | 58 | 19 |
| | vecGenderCats | 6,562 | 58 | 8 |
| | vecWhichCOREused | 6,562 | 58 | 52 |
| RCI1 | SD | 5,479 | 8 | 2,180 |
| | ci | 5,479 | 8 | 225 |
| | compute | 5,479 | 8 | 1,374 |
| | dp | 5,479 | 8 | 76 |
| | generate | 5,479 | 8 | 5 |
| | max | 5,479 | 8 | 2 |
| | min | 5,479 | 8 | 1 |
| | rel | 5,479 | 8 | 1,616 |
| CSC1 | SDHS | 2,712 | 7 | 496 |
| | SDNHS | 2,712 | 7 | 511 |
| | dp | 2,712 | 7 | 31 |
| | maxPoss | 2,712 | 7 | 371 |
| | meanHS | 2,712 | 7 | 542 |
| | meanNHS | 2,712 | 7 | 631 |
| | minPoss | 2,712 | 7 | 130 |
| RCI2 | SD | 489 | 6 | 115 |
| | ci | 489 | 6 | 23 |
| | compute | 489 | 6 | 129 |
| | dp | 489 | 6 | 8 |
| | n | 489 | 6 | 94 |
| | rel | 489 | 6 | 120 |
| CImean | SD | 445 | 5 | 196 |
| | SE | 445 | 5 | 1 |
| | dp | 445 | 5 | 2 |
| | mean | 445 | 5 | 192 |
| | n | 445 | 5 | 54 |
| CORE-OM_scoring | compData_cell_clicked | 225 | 27 | 5 |
| | compData_cells_selected | 225 | 27 | 5 |
| | compData_columns_selected | 225 | 27 | 5 |
| | compData_rows_all | 225 | 27 | 6 |
| | compData_rows_current | 225 | 27 | 12 |
| | compData_rows_selected | 225 | 27 | 5 |
| | compData_search | 225 | 27 | 5 |
| | compData_state | 225 | 27 | 18 |
| | contents_cell_clicked | 225 | 27 | 2 |
| | contents_cells_selected | 225 | 27 | 2 |
| | contents_columns_selected | 225 | 27 | 2 |
| | contents_rows_all | 225 | 27 | 4 |
| | contents_rows_current | 225 | 27 | 4 |
| | contents_rows_selected | 225 | 27 | 2 |
| | contents_search | 225 | 27 | 2 |
| | contents_state | 225 | 27 | 4 |
| | dp | 225 | 27 | 15 |
| | file1 | 225 | 27 | 17 |
| | summary_cell_clicked | 225 | 27 | 1 |
| | summary_cells_selected | 225 | 27 | 1 |
| | summary_columns_selected | 225 | 27 | 1 |
| | summary_rows_all | 225 | 27 | 1 |
| | summary_rows_current | 225 | 27 | 1 |
| | summary_rows_selected | 225 | 27 | 1 |
| | summary_search | 225 | 27 | 1 |
| | summary_state | 225 | 27 | 1 |
| | tabSelected | 225 | 27 | 102 |
| Cronbach1Feldt | alpha | 168 | 5 | 91 |
| | ci | 168 | 5 | 2 |
| | dp | 168 | 5 | 4 |
| | k | 168 | 5 | 38 |
| | n | 168 | 5 | 33 |
| CSClookup2a | Age | 156 | 5 | 2 |
| | Gender | 156 | 5 | 4 |
| | Lookup | 156 | 5 | 21 |
| | Scoring | 156 | 5 | 17 |
| | YPscore | 156 | 5 | 112 |
| random1 | compute | 146 | 11 | 12 |
| | dataTable_cell_clicked | 146 | 11 | 11 |
| | dataTable_cells_selected | 146 | 11 | 11 |
| | dataTable_columns_selected | 146 | 11 | 11 |
| | dataTable_rows_all | 146 | 11 | 24 |
| | dataTable_rows_current | 146 | 11 | 24 |
| | dataTable_rows_selected | 146 | 11 | 11 |
| | dataTable_search | 146 | 11 | 11 |
| | dataTable_state | 146 | 11 | 24 |
| | valN | 146 | 11 | 6 |
| | valSeed | 146 | 11 | 1 |
| ECDFplot | annotationSize | 139 | 32 | 6 |
| | fileHeight | 139 | 32 | 6 |
| | fileHeightQuantiles | 139 | 32 | 2 |
| | fileWidth | 139 | 32 | 6 |
| | fileWidthQuantiles | 139 | 32 | 2 |
| | inputType | 139 | 32 | 10 |
| | pastedData | 139 | 32 | 5 |
| | summary_cell_clicked | 139 | 32 | 2 |
| | summary_cells_selected | 139 | 32 | 2 |
| | summary_columns_selected | 139 | 32 | 2 |
| | summary_rows_all | 139 | 32 | 4 |
| | summary_rows_current | 139 | 32 | 4 |
| | summary_rows_selected | 139 | 32 | 2 |
| | summary_search | 139 | 32 | 2 |
| | summary_state | 139 | 32 | 4 |
| | tabSelected | 139 | 32 | 26 |
| | textSize | 139 | 32 | 6 |
| | textSizeQuantiles | 139 | 32 | 2 |
| | tibQuantiles_cell_clicked | 139 | 32 | 2 |
| | tibQuantiles_cells_selected | 139 | 32 | 2 |
| | tibQuantiles_columns_selected | 139 | 32 | 2 |
| | tibQuantiles_rows_all | 139 | 32 | 4 |
| | tibQuantiles_rows_current | 139 | 32 | 4 |
| | tibQuantiles_rows_selected | 139 | 32 | 2 |
| | tibQuantiles_search | 139 | 32 | 2 |
| | tibQuantiles_state | 139 | 32 | 4 |
| | title | 139 | 32 | 6 |
| | titleQuantiles | 139 | 32 | 2 |
| | xLab | 139 | 32 | 6 |
| | xLabQuantiles | 139 | 32 | 2 |
| | yLab | 139 | 32 | 6 |
| | yLabQuantiles | 139 | 32 | 2 |
| Spearman-Brown | currK | 136 | 13 | 9 |
| | currRel | 136 | 13 | 11 |
| | maxK | 136 | 13 | 6 |
| | plotDownload-filename | 136 | 13 | 1 |
| | reliabilities_cell_clicked | 136 | 13 | 5 |
| | reliabilities_cells_selected | 136 | 13 | 5 |
| | reliabilities_columns_selected | 136 | 13 | 5 |
| | reliabilities_rows_all | 136 | 13 | 24 |
| | reliabilities_rows_current | 136 | 13 | 26 |
| | reliabilities_rows_selected | 136 | 13 | 5 |
| | reliabilities_search | 136 | 13 | 5 |
| | reliabilities_state | 136 | 13 | 26 |
| | step | 136 | 13 | 8 |
| Histogram_and_summary1 | bins | 135 | 24 | 7 |
| | contents_cell_clicked | 135 | 24 | 4 |
| | contents_cells_selected | 135 | 24 | 4 |
| | contents_columns_selected | 135 | 24 | 4 |
| | contents_rows_all | 135 | 24 | 8 |
| | contents_rows_current | 135 | 24 | 8 |
| | contents_rows_selected | 135 | 24 | 4 |
| | contents_search | 135 | 24 | 4 |
| | contents_state | 135 | 24 | 8 |
| | dataType | 135 | 24 | 6 |
| | file1 | 135 | 24 | 8 |
| | plotDownload-format | 135 | 24 | 1 |
| | summary_cell_clicked | 135 | 24 | 3 |
| | summary_cells_selected | 135 | 24 | 3 |
| | summary_columns_selected | 135 | 24 | 3 |
| | summary_rows_all | 135 | 24 | 6 |
| | summary_rows_current | 135 | 24 | 6 |
| | summary_rows_selected | 135 | 24 | 3 |
| | summary_search | 135 | 24 | 3 |
| | summary_state | 135 | 24 | 6 |
| | title | 135 | 24 | 8 |
| | var | 135 | 24 | 9 |
| | xLab | 135 | 24 | 9 |
| | yLab | 135 | 24 | 10 |
| Create_univariate_data | charSeparator | 101 | 11 | 17 |
| | dataTable_cell_clicked | 101 | 11 | 6 |
| | dataTable_cells_selected | 101 | 11 | 6 |
| | dataTable_columns_selected | 101 | 11 | 6 |
| | dataTable_rows_all | 101 | 11 | 12 |
| | dataTable_rows_current | 101 | 11 | 12 |
| | dataTable_rows_selected | 101 | 11 | 6 |
| | dataTable_search | 101 | 11 | 6 |
| | dataTable_state | 101 | 11 | 12 |
| | dist | 101 | 11 | 2 |
| | generate | 101 | 11 | 16 |
| CIcorrelation | R | 72 | 4 | 37 |
| | ci | 72 | 4 | 3 |
| | dp | 72 | 4 | 2 |
| | n | 72 | 4 | 30 |
| CISpearman | Gaussian | 43 | 6 | 2 |
| | ci | 43 | 6 | 2 |
| | dp | 43 | 6 | 3 |
| | method | 43 | 6 | 6 |
| | n | 43 | 6 | 16 |
| | rs | 43 | 6 | 14 |
| Gaussian1 | dp | 27 | 4 | 2 |
| | mean | 27 | 4 | 20 |
| | n | 27 | 4 | 4 |
| | nBins | 27 | 4 | 1 |
| CIproportion | ci | 18 | 4 | 1 |
| | dp | 18 | 4 | 2 |
| | n | 18 | 4 | 6 |
| | x | 18 | 4 | 9 |
| Feldt2 | alpha1 | 12 | 5 | 1 |
| | alpha2 | 12 | 5 | 3 |
| | dp | 12 | 5 | 3 |
| | n1 | 12 | 5 | 3 |
| | n2 | 12 | 5 | 2 |
| Screening1 | prev | 7 | 2 | 5 |
| | spec | 7 | 2 | 2 |
| CISD | SD | 6 | 4 | 1 |
| | SDorVar | 6 | 4 | 3 |
| | ci | 6 | 4 | 1 |
| | n | 6 | 4 | 1 |
| plotCIPearson | R | 4 | 1 | 4 |
| g_from_d_and_n | d | 3 | 2 | 2 |
| | n | 3 | 2 | 1 |
| Attenuation2 | unattR | 2 | 1 | 2 |
| CIdiff2proportions | n1 | 2 | 1 | 2 |

So far nVars is a fixed number for each app as it’s the maximum number of input values the app requests from the user. Some apps, e.g. RCI1, have a variable “compute” that is just the button instructing the app to run, which wasn’t present in early iterations of the app. Another change is that, as I get more savvy about shiny, some apps, perhaps existing ones, may develop a step-by-step interface so that the number of variables input for each use of the app may differ a bit depending on what the user has chosen to do.

Inputs for the RCI1 app

It becomes a bit messy to analyse the inputs as it has to be done (as far as I can currently see) individually by app. It was quite useful as I could see that it had, at least at some point, been possible to enter impossible zero values for reliability and SD. I have now filtered those values out.

Here’s a breakdown for RCI1. These counts only include values that the user entered manually, so if the user just left a value at its default it isn’t counted (however, if the user changes it and then changes it back to the default, that entry of the default value is counted). I guess I could fix that by filling in the default value where a variable doesn’t appear in the inputs for the session, as in the sketch below. I’m not sure that’s sufficiently interesting to be worth the faff.
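A sketch of that “fill in the default” idea, assuming a log tibRCI1inputs with columns session, id and value, and a lookup table tibDefaults of the app’s default value for each input id (both hypothetical):

```r
library(dplyr)
library(tidyr)

tibRCI1inputs %>%
  complete(session, id) %>%                     # one row for every session x input combination
  left_join(tibDefaults, by = "id") %>%
  mutate(value = coalesce(value, default)) %>%  # use the default where nothing was entered
  select(session, id, value)
```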

I guess that the .7 entry for the CI was probably me checking the app worked even for that value but I can’t remember for sure. Otherwise it seems entirely sensible that the only other non-default value was .9. The spread of the reliability values is more interesting and looks sensible to me, similarly for the SD.

I guess I could make the app a more interesting information-gathering tool if I invited users to input the scale/score being used (e.g. “CORE-OM total”, “BDI-II total”) and even perhaps also asked about the dataset (e.g. “my last six months baseline values”, or “the Sheffield X study”), but I think the amount of post-processing that would be necessary to get anything even halfway clean out of that makes it unlikely to be worth the programming/cleaning hassle.

Version history