Printed from https://ideas.repec.org/r/bla/jorssb/v67y2005i3p427-444.html

Geometric representation of high dimension, low sample size data

Citations

Citations are extracted by the CitEc Project.
Cited by:

  1. Makoto Aoshima & Kazuyoshi Yata, 2019. "Distance-based classifier by data transformation for high-dimension, strongly spiked eigenvalue models," Annals of the Institute of Statistical Mathematics, Springer;The Institute of Statistical Mathematics, vol. 71(3), pages 473-503, June.
  2. Bolivar-Cime, A. & Marron, J.S., 2013. "Comparison of binary discrimination methods for high dimension low sample size data," Journal of Multivariate Analysis, Elsevier, vol. 115(C), pages 108-121.
  3. Chung, Hee Cheol & Ahn, Jeongyoun, 2021. "Subspace rotations for high-dimensional outlier detection," Journal of Multivariate Analysis, Elsevier, vol. 183(C).
  4. Yata, Kazuyoshi & Aoshima, Makoto, 2012. "Effective PCA for high-dimension, low-sample-size data with noise reduction via geometric representations," Journal of Multivariate Analysis, Elsevier, vol. 105(1), pages 193-215.
  5. Haiyan Wang & Michael Akritas, 2009. "Rank tests in heteroscedastic multi-way HANOVA," Journal of Nonparametric Statistics, Taylor & Francis Journals, vol. 21(6), pages 663-681.
  6. Arnab Chakrabarti & Rituparna Sen, 2018. "Some Statistical Problems with High Dimensional Financial data," Papers 1808.02953, arXiv.org.
  7. Ishii, Aki & Yata, Kazuyoshi & Aoshima, Makoto, 2022. "Geometric classifiers for high-dimensional noisy data," Journal of Multivariate Analysis, Elsevier, vol. 188(C).
  8. Kristoffer H. Hellton & Magne Thoresen, 2017. "When and Why are Principal Component Scores a Good Tool for Visualizing High-dimensional Data?," Scandinavian Journal of Statistics, Danish Society for Theoretical Statistics;Finnish Statistical Society;Norwegian Statistical Association;Swedish Statistical Association, vol. 44(3), pages 581-597, September.
  9. Marron, J.S., 2017. "Big Data in context and robustness against heterogeneity," Econometrics and Statistics, Elsevier, vol. 2(C), pages 73-80.
  10. Wang, Shao-Hsuan & Huang, Su-Yun & Chen, Ting-Li, 2020. "On asymptotic normality of cross data matrix-based PCA in high dimension low sample size," Journal of Multivariate Analysis, Elsevier, vol. 175(C).
  11. Shin-ichi Tsukada, 2019. "High dimensional two-sample test based on the inter-point distance," Computational Statistics, Springer, vol. 34(2), pages 599-615, June.
  12. Makoto Aoshima & Kazuyoshi Yata, 2014. "A distance-based, misclassification rate adjusted classifier for multiclass, high-dimensional data," Annals of the Institute of Statistical Mathematics, Springer;The Institute of Statistical Mathematics, vol. 66(5), pages 983-1010, October.
  13. Modarres, Reza, 2022. "A high dimensional dissimilarity measure," Computational Statistics & Data Analysis, Elsevier, vol. 175(C).
  14. Jianqing Fan & Yuan Liao & Martina Mincheva, 2013. "Large covariance estimation by thresholding principal orthogonal complements," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 75(4), pages 603-680, September.
  15. Yata, Kazuyoshi & Aoshima, Makoto, 2013. "PCA consistency for the power spiked model in high-dimensional settings," Journal of Multivariate Analysis, Elsevier, vol. 122(C), pages 334-354.
  16. A. P. Zubarev, 2017. "On the Ultrametric Generated by Random Distribution of Points in Euclidean Spaces of Large Dimensions with Correlated Coordinates," Journal of Classification, Springer;The Classification Society, vol. 34(3), pages 366-383, October.
  17. Jun Li, 2018. "Asymptotic normality of interpoint distances for high-dimensional data with applications to the two-sample problem," Biometrika, Biometrika Trust, vol. 105(3), pages 529-546.
  18. Yata, Kazuyoshi & Aoshima, Makoto, 2013. "Correlation tests for high-dimensional data using extended cross-data-matrix methodology," Journal of Multivariate Analysis, Elsevier, vol. 117(C), pages 313-331.
  19. Jung, Sungkyu & Sen, Arusharka & Marron, J.S., 2012. "Boundary behavior in High Dimension, Low Sample Size asymptotics of PCA," Journal of Multivariate Analysis, Elsevier, vol. 109(C), pages 190-203.
  20. Leung, Andy & Yohai, Victor & Zamar, Ruben, 2017. "Multivariate location and scatter matrix estimation under cellwise and casewise contamination," Computational Statistics & Data Analysis, Elsevier, vol. 111(C), pages 59-76.
  21. Choi, Hosik & Yeo, Donghwa & Kwon, Sunghoon & Kim, Yongdai, 2011. "Gene selection and prediction for cancer classification using support vector machines with a reject option," Computational Statistics & Data Analysis, Elsevier, vol. 55(5), pages 1897-1908, May.
  22. Pedro Galeano & Daniel Peña, 2019. "Data science, big data and statistics," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 28(2), pages 289-329, June.
  23. Nakayama, Yugo & Yata, Kazuyoshi & Aoshima, Makoto, 2021. "Clustering by principal component analysis with Gaussian kernel in high-dimension, low-sample-size settings," Journal of Multivariate Analysis, Elsevier, vol. 185(C).
  24. Biswas, Munmun & Ghosh, Anil K., 2014. "A nonparametric two-sample test applicable to high dimensional data," Journal of Multivariate Analysis, Elsevier, vol. 123(C), pages 160-171.
  25. Anil K. Ghosh & Munmun Biswas, 2016. "Distribution-free high-dimensional two-sample tests based on discriminating hyperplanes," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 25(3), pages 525-547, September.
  26. Jonathan Gillard & Emily O’Riordan & Anatoly Zhigljavsky, 2023. "Polynomial whitening for high-dimensional data," Computational Statistics, Springer, vol. 38(3), pages 1427-1461, September.
  27. Shuchun Wang & Wei Jiang & Kwok-Leung Tsui, 2010. "Adjusted support vector machines based on a new loss function," Annals of Operations Research, Springer, vol. 174(1), pages 83-101, February.
  28. Modarres, Reza, 2023. "Analysis of distance matrices," Statistics & Probability Letters, Elsevier, vol. 193(C).
  29. Peter Hall & Yvonne Pittelkow & Malay Ghosh, 2008. "Theoretical measures of relative performance of classifiers for high dimensional data with small sample sizes," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 70(1), pages 159-173, February.
  30. Bernard, Carole & Vanduffel, Steven, 2015. "A new approach to assessing model risk in high dimensions," Journal of Banking & Finance, Elsevier, vol. 58(C), pages 166-178.
  31. Bouveyron, Charles & Brunet-Saumard, Camille, 2014. "Model-based clustering of high-dimensional data: A review," Computational Statistics & Data Analysis, Elsevier, vol. 71(C), pages 52-78.
  32. Yugo Nakayama & Kazuyoshi Yata & Makoto Aoshima, 2020. "Bias-corrected support vector machine with Gaussian kernel in high-dimension, low-sample-size settings," Annals of the Institute of Statistical Mathematics, Springer;The Institute of Statistical Mathematics, vol. 72(5), pages 1257-1286, October.
  33. Kazuyoshi Yata & Makoto Aoshima, 2020. "Geometric consistency of principal component scores for high‐dimensional mixture models and its application," Scandinavian Journal of Statistics, Danish Society for Theoretical Statistics;Finnish Statistical Society;Norwegian Statistical Association;Swedish Statistical Association, vol. 47(3), pages 899-921, September.
  34. Patrick K. Kimes & Yufeng Liu & David Neil Hayes & James Stephen Marron, 2017. "Statistical significance for hierarchical clustering," Biometrics, The International Biometric Society, vol. 73(3), pages 811-821, September.
  35. Paindaveine, Davy, 2009. "On Multivariate Runs Tests for Randomness," Journal of the American Statistical Association, American Statistical Association, vol. 104(488), pages 1525-1538.
  36. Wang, Shao-Hsuan & Huang, Su-Yun, 2022. "Perturbation theory for cross data matrix-based PCA," Journal of Multivariate Analysis, Elsevier, vol. 190(C).
  37. Miecznikowski Jeffrey C. & Gaile Daniel P. & Chen Xiwei & Tritchler David L., 2016. "Identification of consistent functional genetic modules," Statistical Applications in Genetics and Molecular Biology, De Gruyter, vol. 15(1), pages 1-18, March.
  38. Mondal, Pronoy K. & Biswas, Munmun & Ghosh, Anil K., 2015. "On high dimensional two-sample tests based on nearest neighbors," Journal of Multivariate Analysis, Elsevier, vol. 141(C), pages 168-178.
  39. Shen, Dan & Shen, Haipeng & Marron, J.S., 2013. "Consistency of sparse PCA in High Dimension, Low Sample Size contexts," Journal of Multivariate Analysis, Elsevier, vol. 115(C), pages 317-333.
  40. Ursula Laa & Dianne Cook & Stuart Lee, 2020. "Burning Sage: Reversing the Curse of Dimensionality in the Visualization of High-Dimensional Data," Monash Econometrics and Business Statistics Working Papers 36/20, Monash University, Department of Econometrics and Business Statistics.
  41. Bar, Haim & Wells, Martin T., 2023. "On graphical models and convex geometry," Computational Statistics & Data Analysis, Elsevier, vol. 187(C).
  42. Kazuyoshi Yata & Makoto Aoshima, 2012. "Inference on High-Dimensional Mean Vectors with Fewer Observations Than the Dimension," Methodology and Computing in Applied Probability, Springer, vol. 14(3), pages 459-476, September.
  43. Jianqing Fan & Jinchi Lv, 2008. "Sure independence screening for ultrahigh dimensional feature space," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 70(5), pages 849-911, November.
  44. Mao, Guangyu, 2018. "Testing independence in high dimensions using Kendall’s tau," Computational Statistics & Data Analysis, Elsevier, vol. 117(C), pages 128-137.
  45. Yata, Kazuyoshi & Aoshima, Makoto, 2010. "Effective PCA for high-dimension, low-sample-size data with singular value decomposition of cross data matrix," Journal of Multivariate Analysis, Elsevier, vol. 101(9), pages 2060-2077, October.
  46. Jack Jewson & David Rossell, 2022. "General Bayesian loss function selection and the use of improper models," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 84(5), pages 1640-1665, November.
  47. Filzmoser, Peter & Maronna, Ricardo & Werner, Mark, 2008. "Outlier identification in high dimensions," Computational Statistics & Data Analysis, Elsevier, vol. 52(3), pages 1694-1711, January.
  48. Fionn Murtagh, 2009. "The Remarkable Simplicity of Very High Dimensional Data: Application of Model-Based Clustering," Journal of Classification, Springer;The Classification Society, vol. 26(3), pages 249-277, December.
  49. Makoto Aoshima & Kazuyoshi Yata, 2019. "High-Dimensional Quadratic Classifiers in Non-sparse Settings," Methodology and Computing in Applied Probability, Springer, vol. 21(3), pages 663-682, September.
  50. Mao, Guangyu, 2015. "A note on testing complete independence for high dimensional data," Statistics & Probability Letters, Elsevier, vol. 106(C), pages 82-85.
  51. Lee, Myung Hee, 2012. "On the border of extreme and mild spiked models in the HDLSS framework," Journal of Multivariate Analysis, Elsevier, vol. 107(C), pages 162-168.
  52. Hubeyb Gurdogan & Alec Kercheval, 2021. "Multi Anchor Point Shrinkage for the Sample Covariance Matrix (Extended Version)," Papers 2109.00148, arXiv.org, revised Sep 2021.
  53. Borysov, Petro & Hannig, Jan & Marron, J.S., 2014. "Asymptotics of hierarchical clustering for growing dimension," Journal of Multivariate Analysis, Elsevier, vol. 124(C), pages 465-479.
  54. Niladri Roy Chowdhury & Dianne Cook & Heike Hofmann & Mahbubul Majumder & Eun-Kyung Lee & Amy Toth, 2015. "Using visual statistical inference to better understand random class separations in high dimension, low sample size data," Computational Statistics, Springer, vol. 30(2), pages 293-316, June.
  55. repec:jss:jstsof:47:i05 (not listed on IDEAS).
  56. Gerd Christoph & Vladimir V. Ulyanov, 2020. "Second Order Expansions for High-Dimension Low-Sample-Size Data Statistics in Random Setting," Mathematics, MDPI, vol. 8(7), pages 1-28, July.
  57. Jung, Sungkyu, 2018. "Continuum directions for supervised dimension reduction," Computational Statistics & Data Analysis, Elsevier, vol. 125(C), pages 27-43.
  58. Saha, Enakshi & Sarkar, Soham & Ghosh, Anil K., 2017. "Some high-dimensional one-sample tests based on functions of interpoint distances," Journal of Multivariate Analysis, Elsevier, vol. 161(C), pages 83-95.
  59. Davy Paindaveine & Thomas Verdebout, 2013. "Universal Asymptotics for High-Dimensional Sign Tests," Working Papers ECARES ECARES 2013-40, ULB -- Universite Libre de Bruxelles.
  60. Matthieu Stigler & Apratim Dey & Andrew Hobbs & David Lobell, 2022. "With big data come big problems: pitfalls in measuring basis risk for crop index insurance," Papers 2209.14611, arXiv.org.
  61. Paul, Biplab & De, Shyamal K. & Ghosh, Anil K., 2022. "Some clustering-based exact distribution-free k-sample tests applicable to high dimension, low sample size data," Journal of Multivariate Analysis, Elsevier, vol. 190(C).
IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.