Validation accuracy vs Testing accuracy


I am trying to get my head straight on terminology that appears confusing. I know there are three 'splits' of data used in machine-learning models:



  1. Training Data - train the model.

  2. Validation Data - cross-validation for model selection.

  3. Testing Data - estimate the generalisation error.

Now, as far as I am aware, a separate validation set is not always used, since one can apply k-fold cross-validation instead and avoid shrinking one's dataset further; the results of that procedure are known as the validation accuracy. Then, once the best model is selected, it is tested on a 33% split of the initial dataset (which has not been used for training), and the result of this would be the testing accuracy?
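
To make the workflow I have in mind concrete, here is a minimal sketch (this assumes scikit-learn; the 33% hold-out, the dataset, and the candidate models are placeholders I picked purely for illustration):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # Hold out 33% as the test set; it is never used during model selection.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=0)

    # Model selection via 5-fold cross-validation on the training portion.
    # The mean CV score is what I have been calling the "validation accuracy".
    candidates = {
        "logreg": LogisticRegression(max_iter=1000),
        "tree": DecisionTreeClassifier(random_state=0),
    }
    val_acc = {name: cross_val_score(model, X_train, y_train, cv=5).mean()
               for name, model in candidates.items()}
    best_name = max(val_acc, key=val_acc.get)

    # Refit the selected model on the full training set and evaluate it once
    # on the held-out 33%: what I have been calling the "testing accuracy".
    best_model = candidates[best_name].fit(X_train, y_train)
    test_acc = best_model.score(X_test, y_test)
    print(best_name, val_acc[best_name], test_acc)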



Is this the right way around, or is it vice versa? I am finding conflicting terminology used online! I am trying to find explanations for why my validation error is larger than my testing error, but before I look for a solution, I would like to get my terminology correct.



Thanks.










      machine-learning






asked 4 hours ago by BillyJo_rambler




















2 Answers



















There isn't a standard terminology in this context (and I have seen long discussions and debates regarding this topic), so I completely understand you, but you should get used to differing terminology (and assume that terminology might not be consistent, or that it may change across sources).



          I would like to point out a few things:



• I have never seen people use the expression "validation accuracy" (or dataset) to refer to the test accuracy (or dataset), but I have seen people use the term "test accuracy" (or dataset) to refer to the validation accuracy (or dataset). In other words, the test (or testing) accuracy often refers to the validation accuracy, that is, the accuracy you calculate on the dataset you do not use for training, but do use (during the training process) for validating (or "testing") the generalisation ability of your model, or for early stopping.


          • In k-fold cross-validation, people usually only mention two datasets: training and testing (or validation).


• k-fold cross-validation is just a way of validating the model on different subsets of the data. This can be done for several reasons: for example, when you have a small amount of data, your validation (and training) dataset is quite small, so you get a better understanding of the model's generalisation ability by validating it on several subsets of the whole dataset (see the sketch after this list).


• You should likely have a testing dataset separate from the validation dataset, because the validation dataset can be used for early stopping, so, in a certain way, it is dependent on the training process.
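
As an illustration of the k-fold point above, a minimal sketch (scikit-learn assumed; the dataset and the classifier are arbitrary placeholders):

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # One validation score per held-out fold; the spread across folds gives a
    # better picture of generalisation than a single small validation set.
    scores = cross_val_score(LogisticRegression(max_iter=5000), X, y,
                             cv=KFold(n_splits=5, shuffle=True, random_state=0))
    print(scores)                        # five per-fold accuracies
    print(scores.mean(), scores.std())  # summary of variability across subsets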


I would suggest using the following terminology:



          • Training dataset: the data used to fit the model.

          • Validation dataset: the data used to validate the generalisation ability of the model or for early stopping, during the training process.

• Testing dataset: the data used for purposes other than training and validating.

Note that some of these datasets might overlap. Whether that's a "good" thing or not is another question.






answered 4 hours ago by nbro (edited 4 hours ago)

@nbro's answer is complete; I will just add a couple of explanations to supplement it. In more traditional textbooks, data is often partitioned into two sets: training and test. In recent years, with more complex models and an increasing need for model selection, development (or validation) sets are also considered. The development/validation set should have no overlap with the test set, or the reported accuracy/error evaluation is not valid. In the modern setting, the model is trained on the training set and tested on the validation set to see whether it is a good fit; the model is then possibly tweaked, retrained, and re-validated, multiple times. When the final model is selected, the test set is used to compute the accuracy and error reports. The important thing is that the test set is only touched once.
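
A minimal sketch of that loop (scikit-learn assumed; the split sizes, the dataset, and the depth grid are placeholders):

    from sklearn.datasets import load_wine
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_wine(return_X_y=True)

    # Three-way split: 60% train, 20% validation, 20% test.
    X_tmp, X_test, y_tmp, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X_tmp, y_tmp, test_size=0.25, random_state=0)

    # Tweak, retrain, and re-validate as many times as needed.
    best_depth, best_val_acc = None, -1.0
    for depth in (2, 4, 8, None):
        acc = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(
            X_train, y_train).score(X_val, y_val)
        if acc > best_val_acc:
            best_depth, best_val_acc = depth, acc

    # The test set is touched exactly once, at the very end.
    final = DecisionTreeClassifier(max_depth=best_depth, random_state=0)
    final.fit(X_train, y_train)
    print(final.score(X_test, y_test))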






answered 44 mins ago by user3089485 (new contributor)