In R `nnls` (nonnegative least squares) does not guarantee integrality but in this case does give one solution and it happens to be integral: `library(nnls); nnls(A, b)`
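For readers without the nnls package, the same idea can be sketched in base R with box-constrained optimization; the `A` and `b` below are made-up toy data (not from the original problem) chosen so the nonnegative least-squares solution happens to be integral:

```r
# Toy nonnegative least squares: minimize ||A x - b||^2 subject to x >= 0.
# A and b are hypothetical examples with known solution x = (2, 3).
A <- matrix(c(1, 0,
              0, 1,
              1, 1), nrow = 3, byrow = TRUE)
b <- c(2, 3, 5)

obj <- function(x) sum((A %*% x - b)^2)

# L-BFGS-B supports simple box constraints, so lower = 0 enforces x >= 0
fit <- optim(c(0, 0), obj, method = "L-BFGS-B", lower = 0)
fit$par  # here the NNLS solution happens to be integral: (2, 3)
```

With the nnls package installed, `nnls(A, b)` on the same inputs uses a dedicated active-set algorithm instead of general-purpose optimization.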
The poorman package (on CRAN) implements much of the tidyverse with no dependencies. Other packages that provide alternative implementations are datawizard and tidytable.
David P. Ellerman has a mathematical approach to accounting based on what he refers to as the Pacioli group. A provisional element of the Pacioli group looks like x//y where x and y are non-negative integers and we form equivalence classes based on x//y and u//v being equivalent if the cross sums x+v and y+u are equal. The group operation is x//y + u//v = (x+u)//(y+v) and the inverse of x//y is y//x . The identity element is 0//0. For more info see, for example, https://ellerman.org/wp-content/uploads/2012/12/DEB-Math-Mag...
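These rules are easy to play with in R. The encoding below is my own sketch of Ellerman's definitions (not code from the article): an element x//y is stored as the pair c(x, y), addition is componentwise, and equivalence compares cross sums.

```r
# Pacioli group sketch: the element x//y is represented as c(x, y)
p_add <- function(a, b) a + b                       # x//y + u//v = (x+u)//(y+v)
p_inv <- function(a) rev(a)                         # inverse of x//y is y//x
p_eq  <- function(a, b) a[1] + b[2] == a[2] + b[1]  # equivalent iff x+v == y+u

e <- c(0, 0)   # identity element 0//0
a <- c(5, 2)   # the element 5//2

p_eq(p_add(a, p_inv(a)), e)  # a + (-a) is equivalent to 0//0: TRUE
p_eq(a, c(8, 5))             # 5//2 ~ 8//5, both a net "5 - 2 = 3": TRUE
```

Note that inverses only work up to equivalence: 5//2 + 2//5 gives 7//7, which is a different pair from 0//0 but lies in the same equivalence class, which is exactly why the construction yields a group of differences from pairs of non-negative integers.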
The HN title seems inaccurate. This is not necessarily a drop in demand, nor is it applicable to all shipping, as the title suggests.
The article attributes it to an oversupply of container ships, with very few ships scrapped, rather than to demand: "The industry invested heavily in new container ships during and after the pandemic to meet strong demand and benefit from record freight rates. A large number of new ships entered the market since the summer with no signs of idling or scrapping, said Clerc."
Furthermore, just because container ships were over-ordered does not mean that other types of ships were, so it is misleading for the title to refer to shipping in general. It should be "container shipping".
What needs to be added is that, before R, the reproducibility problem in science was compounded by the fact that analyses were done with proprietary software, limiting communication and replication of those analyses. This was and continues to be a major problem, particularly in some fields, but at least now there is a common, widely used language that can be used to overcome it. I wouldn't focus on idiosyncrasies but rather on the major problem R addresses. Any large system will grow over time and accumulate inconsistencies, but after a while you learn the workarounds, so they matter less than the big picture.
On the contrary, the R packaging system is too broken for R to be reliably reproducible. No one specifies package versions or R versions, and base R has no way to install a specific version of a package. There's a package that lets you do that, but, well, you might need a specific version of it. If you need to run an old version of R to reproduce an old script, it may be impossible to install the correct packages with any standard tool because of this problem: the version of devtools that install.packages fetches won't be compatible with your old R, but you need that package to request another version. Instead everyone just ignores the problem and hopes package versions don't matter.
I don't see how R specifically addresses the reproducibility problem. It's been around for almost 30 years, and before its recent rise in popularity lots of science was done in C, perl, fortran, etc. Not to mention that actual dependency versioning is pretty poor: I struggle to run other people's R code after about 6 months (especially if they used the tidyverse, as it pulls in hundreds of unstable dependencies), nobody records which package versions were used, and functions are seemingly deprecated every week.
1. Before R, commercial statistical packages were mainly used. You can, in principle, just use assembler and develop everything yourself, but it isn't practical. Regarding C/C++ and Fortran, many R packages are in fact wrappers around code in those or other languages, making that code easier to access; from that point of view R can be regarded as a glue language.

2. Regarding keeping versions straight, all past versions of CRAN packages are kept in the CRAN archive. Microsoft's MRAN repository also maintains package histories, which can be accessed via the checkpoint package to install packages as they existed on a given date. Furthermore, install_version in the remotes and devtools packages can install specific versions.

3. Regarding tidyverse dependencies, you can reduce the number of packages loaded by not using library(tidyverse) and instead loading only the specific packages you need.
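For concreteness, the version-pinning tools mentioned above are used roughly like this (not run here, since both need network access; the package name, version, and date are arbitrary examples):

```r
# Install a specific historical version of a package from CRAN
# (package name and version are arbitrary examples)
remotes::install_version("dplyr", version = "1.0.0")

# Or pin every package to its state in the snapshot for a given date
# (the date is an arbitrary example)
checkpoint::checkpoint("2022-01-01")
```

install_version resolves the requested version from the CRAN archive, while checkpoint redirects installation to a dated snapshot of the whole repository, so with checkpoint you only record one date rather than a version per package.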
> Before R commercial statistical packages were mainly used.
Maybe in your field, I work in bioinformatics - before R, perl was widely used as a high-level language.
> Regarding keeping versions straight, all past versions of packages in the CRAN repository are kept on CRAN...
This is woefully inadequate if you need to replicate somebody else's environment. Nobody should think manually guessing and then typing in each package version and hoping they're compatible is a viable option. Not to mention that even if you specify an older version of a package, it doesn't pull in compatible dependencies; it just pulls in the latest versions. There's renv, but it hasn't reached widespread use.
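For reference, the renv workflow mentioned above looks roughly like this (a sketch based on renv's documented API; not run here, since it modifies the project library):

```r
renv::init()      # create a project-local library and a lockfile
renv::snapshot()  # record the exact package versions in renv.lock
renv::restore()   # later, or on another machine: reinstall those versions
```

The lockfile records each package's exact version and source, which is the piece that date-based or manual approaches lack; the catch, as noted, is that it only helps if the original author used it.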
> Regarding tidyverse dependencies you can reduce the number of packages you load by not using library(tidyverse) and instead load the specific packages you need. This will result in fewer packages being loaded
We're talking about replicating other people's work. We don't have any control over their code, and R users are largely ignorant of software best practices.
Totally agree. I find it frustrating trying to reproduce other people's work in R. How has this situation been allowed to continue for so long? It's unacceptable, especially when used for science. It's impossible to replicate anything unless you are lucky enough to find which package version introduced the breaking change, and even then you have to repeat that hunt for every code break. Even with _renv_, it's a library you have to install within your R environment, which defeats the point. Where is a dependency solver like conda for R? Not that conda is perfect, but I've been happy with its drop-in replacement, mamba, recently.
The packages that were used in statistics were SAS, SPSS and Stata. perl is not a statistical package and has nowhere near the depth of statistical capabilities of R.
Don't forget that I also mentioned the checkpoint package in my post. You only need to know the date for that, not the version of each of the packages.
In your last paragraph I think you are referring more to software development practices than what is available through R. Simply using R or any language doesn't guarantee this.
That's a very roundabout way to solve a real problem. In many cases your packages aren't all pinned to _latest_ as of a single date, so you need a more fine-grained way of keeping package versions. I don't think checkpoint solves this, and I don't know if it can.
If we look at the secondary endpoints note that 3x as many died in the control group, 2.5x as many needed mechanical ventilation and one third more need to go to the ICU.
               Ivermectin   Control
  n                   247       249
  Mech vent             4        10
  ICU                   6         8
  Died                  3        10
These may not have been primary endpoints and may not have been statistically significant, but they do raise the question of whether they would have been significant had a larger sample size been used.
Once you start dealing with small numbers (e.g. 2% versus 3%) then you would need far, far more patients to reach statistical significance.
It's tempting to look at things like 8 people visiting the ICU in one group but only 6 people in the other group and see that 6 < 8, but the problem is that it's too small of a sample size to decide if it's significant. The article covers that:
> There were no significant differences between ivermectin and control groups for all the prespecified secondary outcomes
The only one that almost comes close is death rate:
> The 28-day in-hospital mortality rate was similar for the ivermectin and control groups (3 [1.2%] vs 10 [4.0%]; RR, 0.31; 95% CI, 0.09 to 1.11; P = .09)
If this was the only Ivermectin study out there, it would be worth following up on. But it's not, and when this is added to the rest of the (not-retracted) studies it doesn't really change the picture.
At this point it matters less and less anyway. Countries that already tried Ivermectin at scale are starting to abandon the approach. Legitimately effective COVID drugs like Paxlovid with highly significant differences are becoming readily available. It's time to stop grasping at straws and accept that it doesn't work.
You are right that the study simply isn't powered to detect a decrease in mortality, even if it is there! That said, if true, a 70% reduction in mortality would still be of significant benefit.
You are right that this study doesn't change the picture. It is just another underpowered study showing a large but statistically insignificant reduction in mortality. Yes, Paxlovid is almost certainly better.
That said, I would like to understand the efficacy of ivermectin with an appropriately powered and designed study. I hope ACTIV-6 reports out this year and that it used a reasonable treatment dose and timing comparable to Paxlovid.
Not being statistically significant is not proof that it doesn't work -- it only means they could not reject the possibility that the results were due to chance. The possibility that it slashed the death rate by 3x (which is what happened in the study) when projected to the world wide deaths of ~ 4.5 million would imply saving the lives of 3 million people so it certainly would be worthwhile to check it out. Maybe it was due to chance but maybe it was not.
> The possibility that it slashed the death rate by 3x (which is what happened in the study) when projected to the world wide deaths of ~ 4.5 million would imply saving the lives of 3 million people so it certainly would be worthwhile to check it out. Maybe it was due to chance but maybe it was not.
When you talk about 3x, you're talking about 10 vs. 3. Extrapolating that out to millions of people is not a great idea.
Let's say you run an ice cream company. You round up 490 friends and ask them to pick their favorite flavor of ice cream: chocolate or vanilla. 477 say they don't eat ice cream, 10 pick chocolate and 3 pick vanilla. You rework your ice cream production to be 3x chocolate : 1x vanilla based on your survey and promptly go out of business.
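To put a number on the ice cream example: if chocolate and vanilla were actually equally popular, a 10-vs-3 split among the 13 respondents would not be a surprising outcome. A quick check with base R's exact binomial test:

```r
# Two-sided exact test of whether 10 "chocolate" out of 13 respondents
# is inconsistent with a true 50/50 preference split
binom.test(10, 13, p = 0.5)$p.value
## [1] 0.09228516
```

So even a 3-to-1 observed ratio fails to reach the conventional 5% significance level with counts this small, which is the whole point of the analogy.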
That's what's going on here as well: the difference between the two groups just isn't statistically significant, so we can't infer any reduction in severe COVID cases.
> Findings: In this open-label randomized clinical trial of high-risk patients with COVID-19 in Malaysia, a 5-day course of oral ivermectin administered during the first week of illness did not reduce the risk of developing severe disease compared with standard of care alone.
Also, they say they used the Fisher exact test and got a p-value for mortality of 0.09, so it seems they did a two-sided test, which is the default for fisher.test in R.
ivm <- c(3, 247-3)
con <- c(10, 249-10)
m <- rbind(ivm, con)
fisher.test(m)$p.value
## [1] 0.08809225
However, I think a one sided test could be justified and in that case it is significant at the 5% level.
fisher.test(m, alternative = "less")$p.value
## ivm
## 0.04541928
Furthermore, a trial just twice as large would have been sufficient to establish significance at the 1% level, even with a two-sided test, if the same death rates held.
ivm <- c(3, 247-3)
con <- c(10, 249-10)
m <- rbind(ivm, con)
fisher.test(2*m)$p.value # 2* so that it is twice as large
## [1] 0.008490957
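Along the same lines, base R's power.prop.test gives a crude planning estimate of the per-group sample size needed to detect the observed mortality rates (1.2% vs 4.0%) with conventional 80% power:

```r
# Approximate per-group sample size to detect 1.2% vs 4.0% mortality
# at the two-sided 5% level with 80% power (a rough planning estimate)
power.prop.test(p1 = 0.012, p2 = 0.04, sig.level = 0.05, power = 0.8)$n
```

This comes out on the order of 500 patients per arm, roughly double this trial's size, which is consistent with the doubling argument above.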
So on the one hand I completely agree with you on the necessity of having enough people to dodge the problem of random chance. The Wikipedia article on the law of large numbers is a good reference.
On the other hand you have three different categories where the numbers from one group are smaller than the numbers from the other group. Could it all be random chance? Sure! But that does kind of hint that there might be something there.
> But that does kind of hint that there might be something there.
The paper does not draw this conclusion. The data you're referencing is too small to be statistically significant.
> Findings: In this open-label randomized clinical trial of high-risk patients with COVID-19 in Malaysia, a 5-day course of oral ivermectin administered during the first week of illness did not reduce the risk of developing severe disease compared with standard of care alone.
> Meaning: The study findings do not support the use of ivermectin for patients with COVID-19.
> Results: Among 490 patients included in the primary analysis (mean [SD] age, 62.5 [8.7] years; 267 women [54.5%]), 52 of 241 patients (21.6%) in the ivermectin group and 43 of 249 patients (17.3%) in the control group progressed to severe disease (relative risk [RR], 1.25; 95% CI, 0.87-1.80; P = .25). For all prespecified secondary outcomes, there were no significant differences between groups.
> Conclusions and Relevance: In this randomized clinical trial of high-risk patients with mild to moderate COVID-19, ivermectin treatment during early illness did not prevent progression to severe disease. The study findings do not support the use of ivermectin for patients with COVID-19.
On the one hand, you've made a rebuttal, quoting the paper. That's good.
On the other hand, you've utterly failed to understand what I'm attempting to say. So that's less good.
> The data you're referencing is too small to be statistically significant.
I explicitly acknowledge this.
>> So on the one hand I completely agree with you on the necessity of having enough people to dodge the problem of random chance. "the law of large numbers" on Wikipedia is good.
That's the acknowledgement.
>> But that does kind of hint that there might be something there.
And here's where I'm saying "if you have these three metrics which are independently all non-significant but they're all trending in the same direction, there might be a 'there' there"
Maybe I didn't say it clearly enough to begin with. I'm not alleging that Ivermectin is COVID Jesus and we all just gotta believe in him in order to be saved. I'm just trying to point out that the data previously quoted should probably get a person's "huh, what's that about?" sense going.
> Could it all be random chance? Sure! But that does kind of hint that there might be something there.
Right, which is why we have studies like this: Early studies showed similar "maybe there's something here" type results, which prompted more studies, which later showed that most likely there wasn't something there.
People also seem to have forgotten that all of the other COVID drug research has progressed significantly in the past two years. Drugs like Paxlovid have indisputably significant effects that leave no room for "maybes" like this and should be ramping up quickly. Even if we were to eventually run a study big enough to find some significant effects of Ivermectin, however small, it's already been left behind by other treatment advances.
For some reason Ivermectin sticks as a political talking point, though, so it continues to be debated to death while everyone in the medical research world has long since moved on to better things.
> which prompted more studies, which later showed that most likely there wasn't something there
Do you have a reference to such a study? I was not aware of any well controlled and appropriately sized studies showing a negative result, but I would be open to reading one.